00:00:00.001 Started by upstream project "autotest-nightly" build number 4282 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3645 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.170 Using shallow fetch with depth 1 00:00:00.170 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.170 > git --version # timeout=10 00:00:00.224 > git --version # 'git version 2.39.2' 00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.444 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.456 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.467 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.467 > git config core.sparsecheckout # timeout=10 00:00:04.477 > git read-tree -mu HEAD # timeout=10 00:00:04.491 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.509 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.509 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.584 [Pipeline] Start of Pipeline 00:00:04.597 [Pipeline] library 00:00:04.599 Loading library shm_lib@master 00:00:04.599 Library shm_lib@master is cached. Copying from home. 00:00:04.615 [Pipeline] node 00:00:04.629 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.631 [Pipeline] { 00:00:04.640 [Pipeline] catchError 00:00:04.641 [Pipeline] { 00:00:04.655 [Pipeline] wrap 00:00:04.663 [Pipeline] { 00:00:04.671 [Pipeline] stage 00:00:04.673 [Pipeline] { (Prologue) 00:00:04.693 [Pipeline] echo 00:00:04.694 Node: VM-host-SM38 00:00:04.701 [Pipeline] cleanWs 00:00:04.712 [WS-CLEANUP] Deleting project workspace... 00:00:04.712 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.719 [WS-CLEANUP] done 00:00:04.915 [Pipeline] setCustomBuildProperty 00:00:04.981 [Pipeline] httpRequest 00:00:05.332 [Pipeline] echo 00:00:05.334 Sorcerer 10.211.164.20 is alive 00:00:05.344 [Pipeline] retry 00:00:05.346 [Pipeline] { 00:00:05.357 [Pipeline] httpRequest 00:00:05.362 HttpMethod: GET 00:00:05.362 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.363 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.364 Response Code: HTTP/1.1 200 OK 00:00:05.365 Success: Status code 200 is in the accepted range: 200,404 00:00:05.366 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.844 [Pipeline] } 00:00:05.856 [Pipeline] // retry 00:00:05.860 [Pipeline] sh 00:00:06.141 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.154 [Pipeline] httpRequest 00:00:06.678 [Pipeline] echo 00:00:06.679 Sorcerer 10.211.164.20 is alive 00:00:06.685 [Pipeline] retry 00:00:06.686 [Pipeline] { 00:00:06.694 [Pipeline] httpRequest 00:00:06.698 HttpMethod: GET 00:00:06.699 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:06.699 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:06.700 Response Code: HTTP/1.1 200 OK 00:00:06.701 Success: Status code 200 is in the accepted range: 200,404 00:00:06.701 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:13.416 [Pipeline] } 00:01:13.434 [Pipeline] // retry 00:01:13.442 [Pipeline] sh 00:01:13.732 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:16.291 [Pipeline] sh 00:01:16.599 + git -C spdk log --oneline -n5 00:01:16.599 d47eb51c9 bdev: fix a race between reset start and complete 00:01:16.599 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:16.599 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:16.599 4bcab9fb9 correct kick for CQ full case 00:01:16.599 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:16.618 [Pipeline] writeFile 00:01:16.634 [Pipeline] sh 00:01:16.933 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:16.944 [Pipeline] sh 00:01:17.222 + cat autorun-spdk.conf 00:01:17.222 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.222 SPDK_TEST_NVME=1 00:01:17.222 SPDK_TEST_FTL=1 00:01:17.222 SPDK_TEST_ISAL=1 00:01:17.222 SPDK_RUN_ASAN=1 00:01:17.222 SPDK_RUN_UBSAN=1 00:01:17.222 SPDK_TEST_XNVME=1 00:01:17.222 SPDK_TEST_NVME_FDP=1 00:01:17.222 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.228 RUN_NIGHTLY=1 00:01:17.230 [Pipeline] } 00:01:17.245 [Pipeline] // stage 00:01:17.262 [Pipeline] stage 00:01:17.265 [Pipeline] { (Run VM) 00:01:17.279 [Pipeline] sh 00:01:17.650 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.650 + echo 'Start stage prepare_nvme.sh' 00:01:17.650 Start stage prepare_nvme.sh 00:01:17.650 + [[ -n 10 ]] 00:01:17.650 + disk_prefix=ex10 00:01:17.650 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:17.650 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:17.650 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:17.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.650 ++ SPDK_TEST_NVME=1 00:01:17.650 ++ SPDK_TEST_FTL=1 00:01:17.650 ++ 
SPDK_TEST_ISAL=1 00:01:17.650 ++ SPDK_RUN_ASAN=1 00:01:17.650 ++ SPDK_RUN_UBSAN=1 00:01:17.650 ++ SPDK_TEST_XNVME=1 00:01:17.650 ++ SPDK_TEST_NVME_FDP=1 00:01:17.650 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.650 ++ RUN_NIGHTLY=1 00:01:17.650 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:17.650 + nvme_files=() 00:01:17.650 + declare -A nvme_files 00:01:17.650 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.650 + nvme_files['nvme.img']=5G 00:01:17.650 + nvme_files['nvme-cmb.img']=5G 00:01:17.650 + nvme_files['nvme-multi0.img']=4G 00:01:17.650 + nvme_files['nvme-multi1.img']=4G 00:01:17.650 + nvme_files['nvme-multi2.img']=4G 00:01:17.650 + nvme_files['nvme-openstack.img']=8G 00:01:17.650 + nvme_files['nvme-zns.img']=5G 00:01:17.650 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.650 + (( SPDK_TEST_FTL == 1 )) 00:01:17.650 + nvme_files["nvme-ftl.img"]=6G 00:01:17.650 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.650 + nvme_files["nvme-fdp.img"]=1G 00:01:17.650 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:17.650 + for nvme in "${!nvme_files[@]}" 00:01:17.650 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:17.650 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.650 + for nvme in "${!nvme_files[@]}" 00:01:17.650 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:18.222 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:18.222 + for nvme in "${!nvme_files[@]}" 00:01:18.222 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:18.222 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.222 + for nvme in "${!nvme_files[@]}" 00:01:18.222 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:18.482 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:18.482 + for nvme in "${!nvme_files[@]}" 00:01:18.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:18.482 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.482 + for nvme in "${!nvme_files[@]}" 00:01:18.483 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:18.483 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.483 + for nvme in "${!nvme_files[@]}" 00:01:18.483 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:18.745 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.745 + for nvme in "${!nvme_files[@]}" 00:01:18.745 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:18.745 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:19.005 + for nvme in "${!nvme_files[@]}" 00:01:19.005 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:19.268 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.268 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:19.531 + echo 'End stage prepare_nvme.sh' 00:01:19.531 End stage prepare_nvme.sh 00:01:19.550 [Pipeline] sh 00:01:19.829 + DISTRO=fedora39 00:01:19.829 + CPUS=10 00:01:19.829 + RAM=12288 00:01:19.829 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:19.829 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:19.829 00:01:19.829 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:19.829 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:19.829 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:19.829 HELP=0 00:01:19.829 DRY_RUN=0 00:01:19.829 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:19.829 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:19.829 NVME_AUTO_CREATE=0 00:01:19.829 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:19.829 NVME_CMB=,,,, 00:01:19.829 NVME_PMR=,,,, 00:01:19.829 NVME_ZNS=,,,, 00:01:19.829 NVME_MS=true,,,, 00:01:19.829 NVME_FDP=,,,on, 00:01:19.829 SPDK_VAGRANT_DISTRO=fedora39 00:01:19.829 SPDK_VAGRANT_VMCPU=10 00:01:19.829 SPDK_VAGRANT_VMRAM=12288 00:01:19.829 SPDK_VAGRANT_PROVIDER=libvirt 00:01:19.829 SPDK_VAGRANT_HTTP_PROXY= 00:01:19.829 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:19.829 SPDK_OPENSTACK_NETWORK=0 00:01:19.829 VAGRANT_PACKAGE_BOX=0 00:01:19.829 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:19.829 FORCE_DISTRO=true 00:01:19.829 VAGRANT_BOX_VERSION= 00:01:19.829 EXTRA_VAGRANTFILES= 00:01:19.829 NIC_MODEL=e1000 00:01:19.829 00:01:19.829 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:19.829 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:22.372 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.633 ==> default: Creating image (snapshot of base box volume). 00:01:22.894 ==> default: Creating domain with the following settings... 
00:01:22.894 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731997514_805509340d9e460b6dbb 00:01:22.894 ==> default: -- Domain type: kvm 00:01:22.894 ==> default: -- Cpus: 10 00:01:22.894 ==> default: -- Feature: acpi 00:01:22.894 ==> default: -- Feature: apic 00:01:22.894 ==> default: -- Feature: pae 00:01:22.894 ==> default: -- Memory: 12288M 00:01:22.894 ==> default: -- Memory Backing: hugepages: 00:01:22.894 ==> default: -- Management MAC: 00:01:22.894 ==> default: -- Loader: 00:01:22.894 ==> default: -- Nvram: 00:01:22.894 ==> default: -- Base box: spdk/fedora39 00:01:22.894 ==> default: -- Storage pool: default 00:01:22.894 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731997514_805509340d9e460b6dbb.img (20G) 00:01:22.894 ==> default: -- Volume Cache: default 00:01:22.894 ==> default: -- Kernel: 00:01:22.894 ==> default: -- Initrd: 00:01:22.894 ==> default: -- Graphics Type: vnc 00:01:22.894 ==> default: -- Graphics Port: -1 00:01:22.894 ==> default: -- Graphics IP: 127.0.0.1 00:01:22.894 ==> default: -- Graphics Password: Not defined 00:01:22.894 ==> default: -- Video Type: cirrus 00:01:22.894 ==> default: -- Video VRAM: 9216 00:01:22.894 ==> default: -- Sound Type: 00:01:22.894 ==> default: -- Keymap: en-us 00:01:22.894 ==> default: -- TPM Path: 00:01:22.894 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:22.894 ==> default: -- Command line args: 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:22.894 ==> default: -> value=-drive, 00:01:22.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:22.894 ==> default: -> value=-device, 00:01:22.894 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.894 ==> default: Creating shared folders metadata... 00:01:22.894 ==> default: Starting domain. 00:01:24.806 ==> default: Waiting for domain to get an IP address... 00:01:42.925 ==> default: Waiting for SSH to become available... 00:01:42.925 ==> default: Configuring and enabling network interfaces... 00:01:44.825 default: SSH address: 192.168.121.70:22 00:01:44.825 default: SSH username: vagrant 00:01:44.825 default: SSH auth method: private key 00:01:46.740 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.333 ==> default: Mounting SSHFS shared folder... 00:01:55.250 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.250 ==> default: Checking Mount.. 00:01:56.633 ==> default: Folder Successfully Mounted! 00:01:56.633 00:01:56.633 SUCCESS! 00:01:56.633 00:01:56.633 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:56.633 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.633 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:56.633 00:01:56.643 [Pipeline] } 00:01:56.659 [Pipeline] // stage 00:01:56.667 [Pipeline] dir 00:01:56.668 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:56.669 [Pipeline] { 00:01:56.681 [Pipeline] catchError 00:01:56.683 [Pipeline] { 00:01:56.694 [Pipeline] sh 00:01:56.978 + vagrant ssh-config --host vagrant 00:01:56.978 + sed -ne '/^Host/,$p' 00:01:56.978 + tee ssh_conf 00:01:59.526 Host vagrant 00:01:59.526 HostName 192.168.121.70 00:01:59.526 User vagrant 00:01:59.526 Port 22 00:01:59.526 UserKnownHostsFile /dev/null 00:01:59.526 StrictHostKeyChecking no 00:01:59.526 PasswordAuthentication no 00:01:59.526 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.526 IdentitiesOnly yes 00:01:59.526 LogLevel FATAL 00:01:59.526 ForwardAgent yes 00:01:59.526 ForwardX11 yes 00:01:59.526 00:01:59.543 [Pipeline] withEnv 00:01:59.546 [Pipeline] { 00:01:59.562 [Pipeline] sh 00:01:59.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:59.848 source /etc/os-release 00:01:59.848 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.848 # Minimal, systemd-like check. 
00:01:59.848 if [[ -e /.dockerenv ]]; then 00:01:59.848 # Clear garbage from the node'\''s name: 00:01:59.848 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.848 # $HOSTNAME is the actual container id 00:01:59.848 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.848 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.848 # We can assume this is a mount from a host where container is running, 00:01:59.848 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.848 container="$(< /etc/hostname) ($agent)" 00:01:59.848 else 00:01:59.848 # Fallback 00:01:59.848 container=$agent 00:01:59.848 fi 00:01:59.848 fi 00:01:59.848 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.848 ' 00:02:00.121 [Pipeline] } 00:02:00.137 [Pipeline] // withEnv 00:02:00.146 [Pipeline] setCustomBuildProperty 00:02:00.161 [Pipeline] stage 00:02:00.163 [Pipeline] { (Tests) 00:02:00.181 [Pipeline] sh 00:02:00.470 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.746 [Pipeline] sh 00:02:01.032 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:01.310 [Pipeline] timeout 00:02:01.310 Timeout set to expire in 50 min 00:02:01.312 [Pipeline] { 00:02:01.326 [Pipeline] sh 00:02:01.653 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:02.225 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:02:02.238 [Pipeline] sh 00:02:02.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:02.801 [Pipeline] sh 00:02:03.086 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:03.369 [Pipeline] sh 00:02:03.654 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:03.916 ++ readlink -f spdk_repo 00:02:03.916 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.916 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.916 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.916 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.916 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.916 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.916 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.916 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:03.916 + cd /home/vagrant/spdk_repo 00:02:03.916 + source /etc/os-release 00:02:03.916 ++ NAME='Fedora Linux' 00:02:03.916 ++ VERSION='39 (Cloud Edition)' 00:02:03.916 ++ ID=fedora 00:02:03.916 ++ VERSION_ID=39 00:02:03.916 ++ VERSION_CODENAME= 00:02:03.916 ++ PLATFORM_ID=platform:f39 00:02:03.916 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.916 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.916 ++ LOGO=fedora-logo-icon 00:02:03.916 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.916 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.916 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.916 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.916 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.916 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.916 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.916 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.916 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.916 ++ SUPPORT_END=2024-11-12 00:02:03.916 ++ VARIANT='Cloud Edition' 00:02:03.916 ++ VARIANT_ID=cloud 00:02:03.916 + uname -a 00:02:03.916 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.916 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:04.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:04.440 Hugepages 00:02:04.440 node hugesize free / total 00:02:04.440 node0 1048576kB 0 / 0 00:02:04.701 node0 2048kB 0 / 0 00:02:04.701 00:02:04.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:04.701 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:04.701 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:04.701 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:04.701 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:02:04.701 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:04.701 + rm -f /tmp/spdk-ld-path 00:02:04.701 + source autorun-spdk.conf 00:02:04.701 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.701 ++ SPDK_TEST_NVME=1 00:02:04.701 ++ SPDK_TEST_FTL=1 00:02:04.701 ++ SPDK_TEST_ISAL=1 00:02:04.701 ++ SPDK_RUN_ASAN=1 00:02:04.701 ++ SPDK_RUN_UBSAN=1 00:02:04.701 ++ SPDK_TEST_XNVME=1 00:02:04.701 ++ SPDK_TEST_NVME_FDP=1 00:02:04.701 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.701 ++ RUN_NIGHTLY=1 00:02:04.701 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.701 + [[ -n '' ]] 00:02:04.701 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:04.701 + for M in /var/spdk/build-*-manifest.txt 00:02:04.701 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:04.701 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.701 + for M in /var/spdk/build-*-manifest.txt 00:02:04.701 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.701 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.701 + for M in /var/spdk/build-*-manifest.txt 00:02:04.701 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.701 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.701 ++ uname 00:02:04.701 + [[ Linux == \L\i\n\u\x ]] 00:02:04.701 + sudo dmesg -T 00:02:04.701 + sudo dmesg --clear 00:02:04.701 + dmesg_pid=5026 00:02:04.701 
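The setup.sh status output above lists four emulated controllers (nvme0 through nvme3, serials 12340-12343) behind the 0000:00:10.0-0000:00:13.0 slots passed on the QEMU command line earlier; note that kernel enumeration order does not follow PCI slot order (nvme3 sits at 00:12.0, nvme2 at 00:13.0). As a minimal cross-check sketch (not part of the CI scripts, and assuming the standard Linux sysfs layout for PCIe NVMe controllers), the same mapping can be read back inside the guest:

    # Print each NVMe controller with its PCI address, serial and namespaces.
    for ctrl in /sys/class/nvme/nvme*; do
        [ -d "$ctrl" ] || continue
        name=$(basename "$ctrl")
        addr=$(cat "$ctrl/address")            # PCI BDF, e.g. 0000:00:10.0
        serial=$(tr -d ' ' < "$ctrl/serial")   # e.g. 12340
        ns=$(ls -d "$ctrl/${name}n"* 2>/dev/null | xargs -rn1 basename | paste -sd' ')
        echo "$name  $addr  serial=$serial  ns: ${ns:-none}"
    done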
+ [[ Fedora Linux == FreeBSD ]] 00:02:04.701 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.701 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.701 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.701 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.701 + sudo dmesg -Tw 00:02:04.701 + export FIO_BIN=/usr/src/fio-static/fio 00:02:04.701 + FIO_BIN=/usr/src/fio-static/fio 00:02:04.701 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.701 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:04.701 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.702 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.702 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.702 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.702 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.702 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.702 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.963 06:25:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:04.963 06:25:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.963 06:25:56 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:04.963 06:25:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:04.963 06:25:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.963 06:25:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:04.963 06:25:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:04.963 06:25:56 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:04.963 06:25:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.963 06:25:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.963 06:25:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.963 06:25:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.963 06:25:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.963 06:25:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.963 06:25:56 -- paths/export.sh@5 -- $ export PATH 00:02:04.963 06:25:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.963 06:25:56 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:04.963 06:25:56 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:04.963 06:25:56 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731997556.XXXXXX 00:02:04.963 06:25:56 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731997556.v51fvq 00:02:04.963 06:25:56 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:04.963 06:25:56 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:04.963 06:25:56 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:04.963 06:25:56 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:04.963 06:25:56 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.963 06:25:56 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:04.963 06:25:56 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:04.963 06:25:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.963 06:25:56 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:04.963 06:25:56 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:04.963 06:25:56 -- pm/common@17 -- $ local monitor 00:02:04.963 06:25:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.963 06:25:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.963 06:25:56 -- pm/common@25 -- $ sleep 1 00:02:04.963 06:25:56 -- pm/common@21 -- $ date +%s 00:02:04.963 06:25:56 -- pm/common@21 -- $ date +%s 00:02:04.963 06:25:56 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731997556 00:02:04.963 06:25:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731997556 00:02:04.963 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731997556_collect-cpu-load.pm.log 00:02:04.964 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731997556_collect-vmstat.pm.log 00:02:05.908 06:25:57 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:05.908 06:25:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.908 06:25:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.908 06:25:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.908 06:25:57 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.908 Tue Nov 19 06:25:57 AM UTC 2024 00:02:05.908 06:25:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.908 v25.01-pre-190-gd47eb51c9 00:02:05.908 06:25:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:05.908 06:25:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:05.908 06:25:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:05.908 06:25:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:05.908 06:25:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.908 ************************************ 00:02:05.908 START TEST asan 00:02:05.908 ************************************ 00:02:05.908 using asan 00:02:05.908 06:25:57 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:05.908 00:02:05.908 real 0m0.000s 00:02:05.908 user 0m0.000s 00:02:05.908 sys 0m0.000s 00:02:05.908 06:25:57 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:05.908 06:25:57 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.908 ************************************ 00:02:05.908 END TEST asan 00:02:05.908 ************************************ 00:02:06.170 06:25:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.170 06:25:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.170 06:25:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:06.170 06:25:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:06.170 06:25:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.170 ************************************ 00:02:06.170 START TEST ubsan 00:02:06.170 ************************************ 00:02:06.170 using ubsan 00:02:06.170 06:25:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:06.170 00:02:06.170 real 0m0.000s 00:02:06.170 user 0m0.000s 00:02:06.170 sys 0m0.000s 00:02:06.170 06:25:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:06.170 ************************************ 00:02:06.170 END TEST ubsan 00:02:06.170 ************************************ 00:02:06.170 06:25:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.170 06:25:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.170 06:25:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.170 06:25:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.170 06:25:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.170 06:25:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.170 06:25:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.170 06:25:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
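The config_params string captured above ('--enable-debug --enable-werror ... --enable-ubsan --enable-asan ... --with-xnvme') is derived from the switches in autorun-spdk.conf; the real logic lives in get_config_params in the SPDK common scripts. A rough, hypothetical sketch of that mapping for the flags exercised in this run:

    # Illustrative only: append one configure flag per enabled test switch.
    source /home/vagrant/spdk_repo/autorun-spdk.conf
    params='--enable-debug --enable-werror'
    [ "${SPDK_RUN_ASAN:-0}"   -eq 1 ] && params+=' --enable-asan'
    [ "${SPDK_RUN_UBSAN:-0}"  -eq 1 ] && params+=' --enable-ubsan'
    [ "${SPDK_TEST_XNVME:-0}" -eq 1 ] && params+=' --with-xnvme'
    echo "$params"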
00:02:06.170 06:25:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.170 06:25:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:06.170 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:06.170 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.743 Using 'verbs' RDMA provider 00:02:20.367 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:30.376 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:30.636 Creating mk/config.mk...done. 00:02:30.636 Creating mk/cc.flags.mk...done. 00:02:30.636 Type 'make' to build. 00:02:30.636 06:26:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:30.636 06:26:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:30.636 06:26:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.636 06:26:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.636 ************************************ 00:02:30.636 START TEST make 00:02:30.636 ************************************ 00:02:30.636 06:26:22 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:30.896 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:30.896 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:30.896 meson setup builddir \ 00:02:30.896 -Dwith-libaio=enabled \ 00:02:30.896 -Dwith-liburing=enabled \ 00:02:30.896 -Dwith-libvfn=disabled \ 00:02:30.896 -Dwith-spdk=disabled \ 00:02:30.896 -Dexamples=false \ 00:02:30.896 -Dtests=false \ 00:02:30.896 -Dtools=false && \ 00:02:30.896 meson compile -C builddir && \ 00:02:30.896 cd -) 00:02:30.896 make[1]: Nothing to be done for 'all'. 
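The make step above hands the xnvme subproject to meson with libaio and liburing enabled and libvfn/spdk/examples/tests/tools disabled. A sketch for repeating just that step by hand, assuming the same checkout layout (autobuild itself does not run this as a separate step):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir --reconfigure \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir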
00:02:33.443 The Meson build system 00:02:33.443 Version: 1.5.0 00:02:33.443 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:33.443 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:33.443 Build type: native build 00:02:33.443 Project name: xnvme 00:02:33.443 Project version: 0.7.5 00:02:33.443 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.443 C linker for the host machine: cc ld.bfd 2.40-14 00:02:33.443 Host machine cpu family: x86_64 00:02:33.443 Host machine cpu: x86_64 00:02:33.443 Message: host_machine.system: linux 00:02:33.443 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:33.443 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:33.443 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:33.443 Run-time dependency threads found: YES 00:02:33.443 Has header "setupapi.h" : NO 00:02:33.443 Has header "linux/blkzoned.h" : YES 00:02:33.443 Has header "linux/blkzoned.h" : YES (cached) 00:02:33.443 Has header "libaio.h" : YES 00:02:33.443 Library aio found: YES 00:02:33.443 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.443 Run-time dependency liburing found: YES 2.2 00:02:33.443 Dependency libvfn skipped: feature with-libvfn disabled 00:02:33.443 Found CMake: /usr/bin/cmake (3.27.7) 00:02:33.443 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:33.443 Subproject spdk : skipped: feature with-spdk disabled 00:02:33.443 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.443 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.443 Library rt found: YES 00:02:33.443 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:33.443 Configuring xnvme_config.h using configuration 00:02:33.443 Configuring xnvme.spec using configuration 00:02:33.443 Run-time dependency bash-completion found: YES 2.11 00:02:33.443 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:33.443 Program cp found: YES (/usr/bin/cp) 00:02:33.443 Build targets in project: 3 00:02:33.443 00:02:33.443 xnvme 0.7.5 00:02:33.443 00:02:33.443 Subprojects 00:02:33.443 spdk : NO Feature 'with-spdk' disabled 00:02:33.443 00:02:33.443 User defined options 00:02:33.443 examples : false 00:02:33.443 tests : false 00:02:33.443 tools : false 00:02:33.443 with-libaio : enabled 00:02:33.443 with-liburing: enabled 00:02:33.443 with-libvfn : disabled 00:02:33.443 with-spdk : disabled 00:02:33.443 00:02:33.443 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.016 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:34.016 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:34.016 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:34.016 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:34.016 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:34.016 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:34.017 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:34.017 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:34.017 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:34.017 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:34.017 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:02:34.017 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:34.017 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:34.017 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:34.017 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:34.017 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:34.017 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:34.017 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:34.017 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:34.017 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:34.017 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:34.017 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:34.017 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:34.017 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:34.017 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:34.017 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:34.017 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:34.278 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:34.278 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:34.278 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:34.278 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:34.278 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:34.278 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:34.278 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:34.278 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:34.278 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:34.278 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:34.278 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:34.278 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:34.278 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:34.278 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:34.278 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:34.278 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:34.278 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:34.278 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:34.278 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:34.278 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:34.278 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:34.278 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:34.278 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:34.278 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:34.278 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:34.278 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:34.278 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:34.278 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:34.278 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:34.278 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:34.278 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:34.278 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:34.278 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:34.278 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:34.540 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:34.540 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:34.540 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:34.540 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:34.540 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:34.540 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:34.540 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:34.540 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:34.540 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:34.540 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:34.540 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:34.540 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:34.540 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:34.800 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:35.060 [75/76] Linking static target lib/libxnvme.a 00:02:35.060 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:35.060 INFO: autodetecting backend as ninja 00:02:35.060 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:35.060 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:41.699 The Meson build system 00:02:41.699 Version: 1.5.0 00:02:41.699 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.699 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.699 Build type: native build 00:02:41.699 Program cat found: YES (/usr/bin/cat) 00:02:41.699 Project name: DPDK 00:02:41.699 Project version: 24.03.0 00:02:41.699 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.699 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.699 Host machine cpu family: x86_64 00:02:41.699 Host machine cpu: x86_64 00:02:41.699 Message: ## Building in Developer Mode ## 00:02:41.699 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.699 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.699 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.699 Program python3 found: YES (/usr/bin/python3) 00:02:41.699 Program cat found: YES (/usr/bin/cat) 00:02:41.699 Compiler for C supports arguments -march=native: YES 00:02:41.699 Checking for size of "void *" : 8 00:02:41.699 Checking for size of "void *" : 8 (cached) 00:02:41.699 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:41.699 Library m found: YES 00:02:41.699 Library numa found: YES 00:02:41.699 Has header "numaif.h" : YES 00:02:41.699 Library fdt found: NO 00:02:41.699 Library execinfo found: NO 00:02:41.699 Has header "execinfo.h" : YES 00:02:41.699 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.699 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.699 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.699 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.699 Run-time dependency openssl found: YES 3.1.1 00:02:41.699 Run-time dependency libpcap found: YES 1.10.4 00:02:41.699 Has header "pcap.h" with dependency libpcap: YES 00:02:41.699 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.699 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.699 Compiler for C supports arguments -Wformat: YES 00:02:41.699 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.699 Compiler for C supports arguments -Wformat-security: NO 00:02:41.699 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.699 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.699 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.699 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.699 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.699 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.699 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.699 Compiler for C supports arguments -Wundef: YES 00:02:41.699 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.699 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.699 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.699 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.699 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.699 Program objdump found: YES (/usr/bin/objdump) 00:02:41.699 Compiler for C supports arguments -mavx512f: YES 00:02:41.699 Checking if "AVX512 checking" compiles: YES 00:02:41.699 Fetching value of define "__SSE4_2__" : 1 00:02:41.699 Fetching value of define "__AES__" : 1 00:02:41.699 Fetching value of define "__AVX__" : 1 00:02:41.699 Fetching value of define "__AVX2__" : 1 00:02:41.699 Fetching value of define "__AVX512BW__" : 1 00:02:41.699 Fetching value of define "__AVX512CD__" : 1 00:02:41.699 Fetching value of define "__AVX512DQ__" : 1 00:02:41.699 Fetching value of define "__AVX512F__" : 1 00:02:41.699 Fetching value of define "__AVX512VL__" : 1 00:02:41.699 Fetching value of define "__PCLMUL__" : 1 00:02:41.699 Fetching value of define "__RDRND__" : 1 00:02:41.699 Fetching value of define "__RDSEED__" : 1 00:02:41.699 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:41.699 Fetching value of define "__znver1__" : (undefined) 00:02:41.699 Fetching value of define "__znver2__" : (undefined) 00:02:41.699 Fetching value of define "__znver3__" : (undefined) 00:02:41.700 Fetching value of define "__znver4__" : (undefined) 00:02:41.700 Library asan found: YES 00:02:41.700 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.700 Message: lib/log: Defining dependency "log" 00:02:41.700 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.700 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.700 Library rt found: YES 00:02:41.700 Checking for function "getentropy" : NO 00:02:41.700 Message: 
lib/eal: Defining dependency "eal" 00:02:41.700 Message: lib/ring: Defining dependency "ring" 00:02:41.700 Message: lib/rcu: Defining dependency "rcu" 00:02:41.700 Message: lib/mempool: Defining dependency "mempool" 00:02:41.700 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.700 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.700 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.700 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.700 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:41.700 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:41.700 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:41.700 Compiler for C supports arguments -mpclmul: YES 00:02:41.700 Compiler for C supports arguments -maes: YES 00:02:41.700 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.700 Compiler for C supports arguments -mavx512bw: YES 00:02:41.700 Compiler for C supports arguments -mavx512dq: YES 00:02:41.700 Compiler for C supports arguments -mavx512vl: YES 00:02:41.700 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.700 Compiler for C supports arguments -mavx2: YES 00:02:41.700 Compiler for C supports arguments -mavx: YES 00:02:41.700 Message: lib/net: Defining dependency "net" 00:02:41.700 Message: lib/meter: Defining dependency "meter" 00:02:41.700 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.700 Message: lib/pci: Defining dependency "pci" 00:02:41.700 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.700 Message: lib/hash: Defining dependency "hash" 00:02:41.700 Message: lib/timer: Defining dependency "timer" 00:02:41.700 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.700 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.700 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.700 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.700 Message: lib/power: Defining dependency "power" 00:02:41.700 Message: lib/reorder: Defining dependency "reorder" 00:02:41.700 Message: lib/security: Defining dependency "security" 00:02:41.700 Has header "linux/userfaultfd.h" : YES 00:02:41.700 Has header "linux/vduse.h" : YES 00:02:41.700 Message: lib/vhost: Defining dependency "vhost" 00:02:41.700 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.700 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.700 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.700 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.700 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.700 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.700 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.700 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.700 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.700 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.700 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.700 Configuring doxy-api-html.conf using configuration 00:02:41.700 Configuring doxy-api-man.conf using configuration 00:02:41.700 Program mandb found: YES (/usr/bin/mandb) 00:02:41.700 Program sphinx-build found: NO 00:02:41.700 Configuring rte_build_config.h using configuration 00:02:41.700 Message: 00:02:41.700 ================= 00:02:41.700 Applications Enabled 00:02:41.700 
================= 00:02:41.700 00:02:41.700 apps: 00:02:41.700 00:02:41.700 00:02:41.700 Message: 00:02:41.700 ================= 00:02:41.700 Libraries Enabled 00:02:41.700 ================= 00:02:41.700 00:02:41.700 libs: 00:02:41.700 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.700 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.700 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.700 00:02:41.700 Message: 00:02:41.700 =============== 00:02:41.700 Drivers Enabled 00:02:41.700 =============== 00:02:41.700 00:02:41.700 common: 00:02:41.700 00:02:41.700 bus: 00:02:41.700 pci, vdev, 00:02:41.700 mempool: 00:02:41.700 ring, 00:02:41.700 dma: 00:02:41.700 00:02:41.700 net: 00:02:41.700 00:02:41.700 crypto: 00:02:41.700 00:02:41.700 compress: 00:02:41.700 00:02:41.700 vdpa: 00:02:41.700 00:02:41.700 00:02:41.700 Message: 00:02:41.700 ================= 00:02:41.700 Content Skipped 00:02:41.700 ================= 00:02:41.700 00:02:41.700 apps: 00:02:41.700 dumpcap: explicitly disabled via build config 00:02:41.700 graph: explicitly disabled via build config 00:02:41.700 pdump: explicitly disabled via build config 00:02:41.700 proc-info: explicitly disabled via build config 00:02:41.700 test-acl: explicitly disabled via build config 00:02:41.700 test-bbdev: explicitly disabled via build config 00:02:41.700 test-cmdline: explicitly disabled via build config 00:02:41.700 test-compress-perf: explicitly disabled via build config 00:02:41.700 test-crypto-perf: explicitly disabled via build config 00:02:41.700 test-dma-perf: explicitly disabled via build config 00:02:41.700 test-eventdev: explicitly disabled via build config 00:02:41.700 test-fib: explicitly disabled via build config 00:02:41.700 test-flow-perf: explicitly disabled via build config 00:02:41.700 test-gpudev: explicitly disabled via build config 00:02:41.700 test-mldev: explicitly disabled via build config 00:02:41.700 test-pipeline: explicitly disabled via build config 00:02:41.700 test-pmd: explicitly disabled via build config 00:02:41.700 test-regex: explicitly disabled via build config 00:02:41.700 test-sad: explicitly disabled via build config 00:02:41.700 test-security-perf: explicitly disabled via build config 00:02:41.700 00:02:41.700 libs: 00:02:41.700 argparse: explicitly disabled via build config 00:02:41.700 metrics: explicitly disabled via build config 00:02:41.700 acl: explicitly disabled via build config 00:02:41.700 bbdev: explicitly disabled via build config 00:02:41.700 bitratestats: explicitly disabled via build config 00:02:41.700 bpf: explicitly disabled via build config 00:02:41.700 cfgfile: explicitly disabled via build config 00:02:41.700 distributor: explicitly disabled via build config 00:02:41.700 efd: explicitly disabled via build config 00:02:41.700 eventdev: explicitly disabled via build config 00:02:41.700 dispatcher: explicitly disabled via build config 00:02:41.700 gpudev: explicitly disabled via build config 00:02:41.700 gro: explicitly disabled via build config 00:02:41.700 gso: explicitly disabled via build config 00:02:41.700 ip_frag: explicitly disabled via build config 00:02:41.700 jobstats: explicitly disabled via build config 00:02:41.700 latencystats: explicitly disabled via build config 00:02:41.700 lpm: explicitly disabled via build config 00:02:41.700 member: explicitly disabled via build config 00:02:41.700 pcapng: explicitly disabled via build config 00:02:41.700 rawdev: explicitly disabled via build config 00:02:41.700 regexdev: explicitly 
disabled via build config 00:02:41.700 mldev: explicitly disabled via build config 00:02:41.700 rib: explicitly disabled via build config 00:02:41.700 sched: explicitly disabled via build config 00:02:41.700 stack: explicitly disabled via build config 00:02:41.700 ipsec: explicitly disabled via build config 00:02:41.700 pdcp: explicitly disabled via build config 00:02:41.700 fib: explicitly disabled via build config 00:02:41.700 port: explicitly disabled via build config 00:02:41.700 pdump: explicitly disabled via build config 00:02:41.700 table: explicitly disabled via build config 00:02:41.700 pipeline: explicitly disabled via build config 00:02:41.700 graph: explicitly disabled via build config 00:02:41.700 node: explicitly disabled via build config 00:02:41.700 00:02:41.700 drivers: 00:02:41.700 common/cpt: not in enabled drivers build config 00:02:41.700 common/dpaax: not in enabled drivers build config 00:02:41.700 common/iavf: not in enabled drivers build config 00:02:41.700 common/idpf: not in enabled drivers build config 00:02:41.700 common/ionic: not in enabled drivers build config 00:02:41.700 common/mvep: not in enabled drivers build config 00:02:41.700 common/octeontx: not in enabled drivers build config 00:02:41.700 bus/auxiliary: not in enabled drivers build config 00:02:41.700 bus/cdx: not in enabled drivers build config 00:02:41.700 bus/dpaa: not in enabled drivers build config 00:02:41.700 bus/fslmc: not in enabled drivers build config 00:02:41.700 bus/ifpga: not in enabled drivers build config 00:02:41.700 bus/platform: not in enabled drivers build config 00:02:41.700 bus/uacce: not in enabled drivers build config 00:02:41.700 bus/vmbus: not in enabled drivers build config 00:02:41.700 common/cnxk: not in enabled drivers build config 00:02:41.700 common/mlx5: not in enabled drivers build config 00:02:41.700 common/nfp: not in enabled drivers build config 00:02:41.700 common/nitrox: not in enabled drivers build config 00:02:41.700 common/qat: not in enabled drivers build config 00:02:41.700 common/sfc_efx: not in enabled drivers build config 00:02:41.700 mempool/bucket: not in enabled drivers build config 00:02:41.700 mempool/cnxk: not in enabled drivers build config 00:02:41.700 mempool/dpaa: not in enabled drivers build config 00:02:41.700 mempool/dpaa2: not in enabled drivers build config 00:02:41.700 mempool/octeontx: not in enabled drivers build config 00:02:41.700 mempool/stack: not in enabled drivers build config 00:02:41.700 dma/cnxk: not in enabled drivers build config 00:02:41.700 dma/dpaa: not in enabled drivers build config 00:02:41.700 dma/dpaa2: not in enabled drivers build config 00:02:41.700 dma/hisilicon: not in enabled drivers build config 00:02:41.700 dma/idxd: not in enabled drivers build config 00:02:41.700 dma/ioat: not in enabled drivers build config 00:02:41.700 dma/skeleton: not in enabled drivers build config 00:02:41.700 net/af_packet: not in enabled drivers build config 00:02:41.701 net/af_xdp: not in enabled drivers build config 00:02:41.701 net/ark: not in enabled drivers build config 00:02:41.701 net/atlantic: not in enabled drivers build config 00:02:41.701 net/avp: not in enabled drivers build config 00:02:41.701 net/axgbe: not in enabled drivers build config 00:02:41.701 net/bnx2x: not in enabled drivers build config 00:02:41.701 net/bnxt: not in enabled drivers build config 00:02:41.701 net/bonding: not in enabled drivers build config 00:02:41.701 net/cnxk: not in enabled drivers build config 00:02:41.701 net/cpfl: not in enabled drivers 
build config 00:02:41.701 net/cxgbe: not in enabled drivers build config 00:02:41.701 net/dpaa: not in enabled drivers build config 00:02:41.701 net/dpaa2: not in enabled drivers build config 00:02:41.701 net/e1000: not in enabled drivers build config 00:02:41.701 net/ena: not in enabled drivers build config 00:02:41.701 net/enetc: not in enabled drivers build config 00:02:41.701 net/enetfec: not in enabled drivers build config 00:02:41.701 net/enic: not in enabled drivers build config 00:02:41.701 net/failsafe: not in enabled drivers build config 00:02:41.701 net/fm10k: not in enabled drivers build config 00:02:41.701 net/gve: not in enabled drivers build config 00:02:41.701 net/hinic: not in enabled drivers build config 00:02:41.701 net/hns3: not in enabled drivers build config 00:02:41.701 net/i40e: not in enabled drivers build config 00:02:41.701 net/iavf: not in enabled drivers build config 00:02:41.701 net/ice: not in enabled drivers build config 00:02:41.701 net/idpf: not in enabled drivers build config 00:02:41.701 net/igc: not in enabled drivers build config 00:02:41.701 net/ionic: not in enabled drivers build config 00:02:41.701 net/ipn3ke: not in enabled drivers build config 00:02:41.701 net/ixgbe: not in enabled drivers build config 00:02:41.701 net/mana: not in enabled drivers build config 00:02:41.701 net/memif: not in enabled drivers build config 00:02:41.701 net/mlx4: not in enabled drivers build config 00:02:41.701 net/mlx5: not in enabled drivers build config 00:02:41.701 net/mvneta: not in enabled drivers build config 00:02:41.701 net/mvpp2: not in enabled drivers build config 00:02:41.701 net/netvsc: not in enabled drivers build config 00:02:41.701 net/nfb: not in enabled drivers build config 00:02:41.701 net/nfp: not in enabled drivers build config 00:02:41.701 net/ngbe: not in enabled drivers build config 00:02:41.701 net/null: not in enabled drivers build config 00:02:41.701 net/octeontx: not in enabled drivers build config 00:02:41.701 net/octeon_ep: not in enabled drivers build config 00:02:41.701 net/pcap: not in enabled drivers build config 00:02:41.701 net/pfe: not in enabled drivers build config 00:02:41.701 net/qede: not in enabled drivers build config 00:02:41.701 net/ring: not in enabled drivers build config 00:02:41.701 net/sfc: not in enabled drivers build config 00:02:41.701 net/softnic: not in enabled drivers build config 00:02:41.701 net/tap: not in enabled drivers build config 00:02:41.701 net/thunderx: not in enabled drivers build config 00:02:41.701 net/txgbe: not in enabled drivers build config 00:02:41.701 net/vdev_netvsc: not in enabled drivers build config 00:02:41.701 net/vhost: not in enabled drivers build config 00:02:41.701 net/virtio: not in enabled drivers build config 00:02:41.701 net/vmxnet3: not in enabled drivers build config 00:02:41.701 raw/*: missing internal dependency, "rawdev" 00:02:41.701 crypto/armv8: not in enabled drivers build config 00:02:41.701 crypto/bcmfs: not in enabled drivers build config 00:02:41.701 crypto/caam_jr: not in enabled drivers build config 00:02:41.701 crypto/ccp: not in enabled drivers build config 00:02:41.701 crypto/cnxk: not in enabled drivers build config 00:02:41.701 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.701 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.701 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.701 crypto/mlx5: not in enabled drivers build config 00:02:41.701 crypto/mvsam: not in enabled drivers build config 00:02:41.701 crypto/nitrox: 
not in enabled drivers build config 00:02:41.701 crypto/null: not in enabled drivers build config 00:02:41.701 crypto/octeontx: not in enabled drivers build config 00:02:41.701 crypto/openssl: not in enabled drivers build config 00:02:41.701 crypto/scheduler: not in enabled drivers build config 00:02:41.701 crypto/uadk: not in enabled drivers build config 00:02:41.701 crypto/virtio: not in enabled drivers build config 00:02:41.701 compress/isal: not in enabled drivers build config 00:02:41.701 compress/mlx5: not in enabled drivers build config 00:02:41.701 compress/nitrox: not in enabled drivers build config 00:02:41.701 compress/octeontx: not in enabled drivers build config 00:02:41.701 compress/zlib: not in enabled drivers build config 00:02:41.701 regex/*: missing internal dependency, "regexdev" 00:02:41.701 ml/*: missing internal dependency, "mldev" 00:02:41.701 vdpa/ifc: not in enabled drivers build config 00:02:41.701 vdpa/mlx5: not in enabled drivers build config 00:02:41.701 vdpa/nfp: not in enabled drivers build config 00:02:41.701 vdpa/sfc: not in enabled drivers build config 00:02:41.701 event/*: missing internal dependency, "eventdev" 00:02:41.701 baseband/*: missing internal dependency, "bbdev" 00:02:41.701 gpu/*: missing internal dependency, "gpudev" 00:02:41.701 00:02:41.701 00:02:41.701 Build targets in project: 84 00:02:41.701 00:02:41.701 DPDK 24.03.0 00:02:41.701 00:02:41.701 User defined options 00:02:41.701 buildtype : debug 00:02:41.701 default_library : shared 00:02:41.701 libdir : lib 00:02:41.701 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.701 b_sanitize : address 00:02:41.701 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:41.701 c_link_args : 00:02:41.701 cpu_instruction_set: native 00:02:41.701 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:41.701 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:41.701 enable_docs : false 00:02:41.701 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.701 enable_kmods : false 00:02:41.701 max_lcores : 128 00:02:41.701 tests : false 00:02:41.701 00:02:41.701 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.962 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.221 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.221 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.221 [3/267] Linking static target lib/librte_kvargs.a 00:02:42.221 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.221 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.221 [6/267] Linking static target lib/librte_log.a 00:02:42.479 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.479 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.479 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.479 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.479 [11/267] Compiling 
C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.479 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.479 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.479 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.479 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.479 [16/267] Linking static target lib/librte_telemetry.a 00:02:42.479 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.738 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.997 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.997 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.997 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.997 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.997 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.997 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.997 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.997 [26/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.255 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.255 [28/267] Linking target lib/librte_log.so.24.1 00:02:43.255 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.255 [30/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.255 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:43.255 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.255 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.514 [34/267] Linking target lib/librte_kvargs.so.24.1 00:02:43.514 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:43.514 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.514 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.514 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.514 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.514 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.514 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.514 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:43.514 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.514 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:43.514 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.514 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.772 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.772 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.772 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:43.772 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.031 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.031 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.031 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.031 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.031 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.031 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.031 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.289 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.289 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.289 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.289 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.289 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.289 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.547 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.547 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.547 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.547 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.547 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.547 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.806 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.806 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.806 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.806 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.806 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.806 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.806 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.064 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.064 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.064 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.064 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.064 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.064 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.322 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.322 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.322 [85/267] Linking static target lib/librte_ring.a 00:02:45.581 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.581 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.581 [88/267] Linking static target lib/librte_eal.a 00:02:45.581 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.581 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.581 [91/267] 
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.581 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.840 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.840 [94/267] Linking static target lib/librte_rcu.a 00:02:45.840 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.840 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:45.840 [97/267] Linking static target lib/librte_mempool.a 00:02:45.840 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.098 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.098 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.098 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.098 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.098 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.357 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.357 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.357 [106/267] Linking static target lib/librte_meter.a 00:02:46.357 [107/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.357 [108/267] Linking static target lib/librte_net.a 00:02:46.357 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.615 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.615 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.615 [112/267] Linking static target lib/librte_mbuf.a 00:02:46.615 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.615 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.615 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.874 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.874 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.874 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.132 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.132 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:47.132 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.391 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.391 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.391 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.391 [125/267] Linking static target lib/librte_pci.a 00:02:47.391 [126/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.391 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.391 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.649 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.649 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.649 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.649 [132/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.649 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.649 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.649 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.649 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.649 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.649 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.907 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.907 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.907 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.907 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:47.907 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.907 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.907 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.165 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.165 [147/267] Linking static target lib/librte_cmdline.a 00:02:48.165 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:48.165 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.165 [150/267] Linking static target lib/librte_timer.a 00:02:48.165 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.424 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.424 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.424 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.683 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.683 [156/267] Linking static target lib/librte_ethdev.a 00:02:48.683 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.683 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.683 [159/267] Linking static target lib/librte_compressdev.a 00:02:48.683 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.683 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.683 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.941 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.941 [164/267] Linking static target lib/librte_hash.a 00:02:48.941 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.941 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.941 [167/267] Linking static target lib/librte_dmadev.a 00:02:48.941 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.200 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.200 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.200 [171/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.459 [172/267] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.459 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.459 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.459 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.459 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.718 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.718 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.718 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.718 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.718 [181/267] Linking static target lib/librte_cryptodev.a 00:02:49.718 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.718 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.718 [184/267] Linking static target lib/librte_power.a 00:02:49.718 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.976 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.976 [187/267] Linking static target lib/librte_reorder.a 00:02:49.976 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.976 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.977 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.977 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.977 [192/267] Linking static target lib/librte_security.a 00:02:50.236 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.494 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.753 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.753 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.753 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.753 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.753 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.013 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.013 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.013 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.013 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.013 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.274 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.274 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.274 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.274 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.274 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.532 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.532 [211/267] Compiling 
C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.532 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.532 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.532 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.532 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.532 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.532 [217/267] Linking static target drivers/librte_bus_vdev.a 00:02:51.532 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.532 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.532 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:51.789 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.789 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.790 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.790 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:51.790 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.047 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.612 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.178 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.178 [229/267] Linking target lib/librte_eal.so.24.1 00:02:53.437 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.437 [231/267] Linking target lib/librte_ring.so.24.1 00:02:53.437 [232/267] Linking target lib/librte_pci.so.24.1 00:02:53.437 [233/267] Linking target lib/librte_meter.so.24.1 00:02:53.437 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:53.437 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.437 [236/267] Linking target lib/librte_timer.so.24.1 00:02:53.437 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.437 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.437 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.437 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.696 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.696 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.696 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:53.696 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:53.696 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.696 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.696 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.696 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:53.955 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.955 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:53.955 [251/267] Linking target 
lib/librte_net.so.24.1 00:02:53.955 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:53.955 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:53.955 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.955 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.955 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.955 [257/267] Linking target lib/librte_hash.so.24.1 00:02:53.955 [258/267] Linking target lib/librte_security.so.24.1 00:02:54.214 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.214 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.214 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:54.214 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.472 [263/267] Linking target lib/librte_power.so.24.1 00:02:55.848 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.848 [265/267] Linking static target lib/librte_vhost.a 00:02:57.225 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.225 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:57.225 INFO: autodetecting backend as ninja 00:02:57.225 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:15.302 CC lib/log/log.o 00:03:15.302 CC lib/log/log_flags.o 00:03:15.302 CC lib/log/log_deprecated.o 00:03:15.302 CC lib/ut_mock/mock.o 00:03:15.302 CC lib/ut/ut.o 00:03:15.302 LIB libspdk_ut_mock.a 00:03:15.302 LIB libspdk_log.a 00:03:15.302 LIB libspdk_ut.a 00:03:15.302 SO libspdk_ut_mock.so.6.0 00:03:15.302 SO libspdk_log.so.7.1 00:03:15.302 SO libspdk_ut.so.2.0 00:03:15.302 SYMLINK libspdk_ut_mock.so 00:03:15.302 SYMLINK libspdk_log.so 00:03:15.302 SYMLINK libspdk_ut.so 00:03:15.302 CC lib/util/base64.o 00:03:15.302 CC lib/util/bit_array.o 00:03:15.302 CXX lib/trace_parser/trace.o 00:03:15.302 CC lib/util/cpuset.o 00:03:15.302 CC lib/util/crc32.o 00:03:15.302 CC lib/util/crc32c.o 00:03:15.302 CC lib/util/crc16.o 00:03:15.302 CC lib/ioat/ioat.o 00:03:15.302 CC lib/dma/dma.o 00:03:15.302 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.302 CC lib/util/crc32_ieee.o 00:03:15.302 CC lib/vfio_user/host/vfio_user.o 00:03:15.302 CC lib/util/crc64.o 00:03:15.302 CC lib/util/dif.o 00:03:15.302 LIB libspdk_dma.a 00:03:15.302 CC lib/util/fd.o 00:03:15.302 CC lib/util/fd_group.o 00:03:15.302 SO libspdk_dma.so.5.0 00:03:15.302 CC lib/util/file.o 00:03:15.302 CC lib/util/hexlify.o 00:03:15.302 CC lib/util/iov.o 00:03:15.302 SYMLINK libspdk_dma.so 00:03:15.302 CC lib/util/math.o 00:03:15.302 LIB libspdk_ioat.a 00:03:15.302 CC lib/util/net.o 00:03:15.302 SO libspdk_ioat.so.7.0 00:03:15.302 LIB libspdk_vfio_user.a 00:03:15.302 SO libspdk_vfio_user.so.5.0 00:03:15.302 SYMLINK libspdk_ioat.so 00:03:15.302 CC lib/util/pipe.o 00:03:15.302 CC lib/util/strerror_tls.o 00:03:15.302 SYMLINK libspdk_vfio_user.so 00:03:15.302 CC lib/util/string.o 00:03:15.302 CC lib/util/uuid.o 00:03:15.302 CC lib/util/xor.o 00:03:15.302 CC lib/util/zipf.o 00:03:15.302 CC lib/util/md5.o 00:03:15.302 LIB libspdk_util.a 00:03:15.302 SO libspdk_util.so.10.1 00:03:15.302 LIB libspdk_trace_parser.a 00:03:15.302 SYMLINK libspdk_util.so 00:03:15.302 SO libspdk_trace_parser.so.6.0 00:03:15.302 SYMLINK libspdk_trace_parser.so 00:03:15.302 CC lib/conf/conf.o 
00:03:15.302 CC lib/json/json_util.o 00:03:15.302 CC lib/json/json_write.o 00:03:15.302 CC lib/vmd/led.o 00:03:15.302 CC lib/vmd/vmd.o 00:03:15.302 CC lib/json/json_parse.o 00:03:15.302 CC lib/rdma_utils/rdma_utils.o 00:03:15.302 CC lib/env_dpdk/memory.o 00:03:15.302 CC lib/env_dpdk/env.o 00:03:15.302 CC lib/idxd/idxd.o 00:03:15.302 CC lib/env_dpdk/pci.o 00:03:15.302 LIB libspdk_conf.a 00:03:15.302 LIB libspdk_rdma_utils.a 00:03:15.302 SO libspdk_rdma_utils.so.1.0 00:03:15.302 CC lib/env_dpdk/init.o 00:03:15.302 SO libspdk_conf.so.6.0 00:03:15.302 CC lib/env_dpdk/threads.o 00:03:15.302 LIB libspdk_json.a 00:03:15.302 SYMLINK libspdk_conf.so 00:03:15.302 SYMLINK libspdk_rdma_utils.so 00:03:15.302 CC lib/env_dpdk/pci_ioat.o 00:03:15.302 CC lib/env_dpdk/pci_virtio.o 00:03:15.302 SO libspdk_json.so.6.0 00:03:15.302 SYMLINK libspdk_json.so 00:03:15.302 CC lib/env_dpdk/pci_vmd.o 00:03:15.302 CC lib/env_dpdk/pci_idxd.o 00:03:15.302 CC lib/env_dpdk/pci_event.o 00:03:15.302 CC lib/idxd/idxd_user.o 00:03:15.302 CC lib/idxd/idxd_kernel.o 00:03:15.302 CC lib/env_dpdk/sigbus_handler.o 00:03:15.302 CC lib/env_dpdk/pci_dpdk.o 00:03:15.302 CC lib/rdma_provider/common.o 00:03:15.302 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.302 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.302 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.302 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.302 LIB libspdk_idxd.a 00:03:15.302 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.302 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.302 LIB libspdk_vmd.a 00:03:15.303 SO libspdk_idxd.so.12.1 00:03:15.303 SO libspdk_vmd.so.6.0 00:03:15.303 SYMLINK libspdk_idxd.so 00:03:15.303 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.303 SYMLINK libspdk_vmd.so 00:03:15.303 LIB libspdk_rdma_provider.a 00:03:15.303 SO libspdk_rdma_provider.so.7.0 00:03:15.303 LIB libspdk_jsonrpc.a 00:03:15.303 SYMLINK libspdk_rdma_provider.so 00:03:15.303 SO libspdk_jsonrpc.so.6.0 00:03:15.303 SYMLINK libspdk_jsonrpc.so 00:03:15.303 CC lib/rpc/rpc.o 00:03:15.303 LIB libspdk_env_dpdk.a 00:03:15.303 LIB libspdk_rpc.a 00:03:15.303 SO libspdk_rpc.so.6.0 00:03:15.303 SO libspdk_env_dpdk.so.15.1 00:03:15.560 SYMLINK libspdk_rpc.so 00:03:15.560 SYMLINK libspdk_env_dpdk.so 00:03:15.560 CC lib/keyring/keyring.o 00:03:15.560 CC lib/keyring/keyring_rpc.o 00:03:15.560 CC lib/notify/notify.o 00:03:15.560 CC lib/notify/notify_rpc.o 00:03:15.560 CC lib/trace/trace.o 00:03:15.560 CC lib/trace/trace_flags.o 00:03:15.560 CC lib/trace/trace_rpc.o 00:03:15.818 LIB libspdk_notify.a 00:03:15.818 SO libspdk_notify.so.6.0 00:03:15.818 SYMLINK libspdk_notify.so 00:03:15.818 LIB libspdk_keyring.a 00:03:15.818 SO libspdk_keyring.so.2.0 00:03:15.818 LIB libspdk_trace.a 00:03:15.818 SO libspdk_trace.so.11.0 00:03:15.818 SYMLINK libspdk_keyring.so 00:03:15.818 SYMLINK libspdk_trace.so 00:03:16.077 CC lib/thread/iobuf.o 00:03:16.077 CC lib/thread/thread.o 00:03:16.077 CC lib/sock/sock.o 00:03:16.077 CC lib/sock/sock_rpc.o 00:03:16.644 LIB libspdk_sock.a 00:03:16.644 SO libspdk_sock.so.10.0 00:03:16.644 SYMLINK libspdk_sock.so 00:03:16.902 CC lib/nvme/nvme_ctrlr.o 00:03:16.902 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.902 CC lib/nvme/nvme_pcie_common.o 00:03:16.902 CC lib/nvme/nvme_ns_cmd.o 00:03:16.902 CC lib/nvme/nvme_fabric.o 00:03:16.902 CC lib/nvme/nvme.o 00:03:16.902 CC lib/nvme/nvme_pcie.o 00:03:16.902 CC lib/nvme/nvme_ns.o 00:03:16.902 CC lib/nvme/nvme_qpair.o 00:03:17.476 CC lib/nvme/nvme_quirks.o 00:03:17.476 CC lib/nvme/nvme_transport.o 00:03:17.476 CC lib/nvme/nvme_discovery.o 00:03:17.476 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.476 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.734 CC lib/nvme/nvme_tcp.o 00:03:17.734 LIB libspdk_thread.a 00:03:17.734 CC lib/nvme/nvme_opal.o 00:03:17.734 SO libspdk_thread.so.11.0 00:03:17.734 SYMLINK libspdk_thread.so 00:03:17.734 CC lib/nvme/nvme_io_msg.o 00:03:17.993 CC lib/accel/accel.o 00:03:17.993 CC lib/accel/accel_rpc.o 00:03:17.993 CC lib/blob/blobstore.o 00:03:17.993 CC lib/nvme/nvme_poll_group.o 00:03:17.993 CC lib/blob/request.o 00:03:17.993 CC lib/blob/zeroes.o 00:03:18.251 CC lib/blob/blob_bs_dev.o 00:03:18.251 CC lib/accel/accel_sw.o 00:03:18.509 CC lib/nvme/nvme_zns.o 00:03:18.509 CC lib/virtio/virtio.o 00:03:18.509 CC lib/init/json_config.o 00:03:18.509 CC lib/fsdev/fsdev.o 00:03:18.509 CC lib/nvme/nvme_stubs.o 00:03:18.509 CC lib/nvme/nvme_auth.o 00:03:18.767 CC lib/virtio/virtio_vhost_user.o 00:03:18.767 CC lib/init/subsystem.o 00:03:18.767 LIB libspdk_accel.a 00:03:18.767 CC lib/virtio/virtio_vfio_user.o 00:03:18.767 SO libspdk_accel.so.16.0 00:03:18.767 CC lib/virtio/virtio_pci.o 00:03:18.767 CC lib/init/subsystem_rpc.o 00:03:18.767 SYMLINK libspdk_accel.so 00:03:18.767 CC lib/fsdev/fsdev_io.o 00:03:19.025 CC lib/fsdev/fsdev_rpc.o 00:03:19.025 CC lib/nvme/nvme_cuse.o 00:03:19.025 CC lib/init/rpc.o 00:03:19.025 CC lib/nvme/nvme_rdma.o 00:03:19.025 LIB libspdk_virtio.a 00:03:19.025 LIB libspdk_init.a 00:03:19.025 SO libspdk_virtio.so.7.0 00:03:19.284 CC lib/bdev/bdev.o 00:03:19.284 CC lib/bdev/bdev_rpc.o 00:03:19.284 SO libspdk_init.so.6.0 00:03:19.284 CC lib/bdev/bdev_zone.o 00:03:19.284 SYMLINK libspdk_virtio.so 00:03:19.284 CC lib/bdev/part.o 00:03:19.284 SYMLINK libspdk_init.so 00:03:19.284 CC lib/bdev/scsi_nvme.o 00:03:19.284 LIB libspdk_fsdev.a 00:03:19.284 SO libspdk_fsdev.so.2.0 00:03:19.284 SYMLINK libspdk_fsdev.so 00:03:19.284 CC lib/event/app.o 00:03:19.284 CC lib/event/reactor.o 00:03:19.543 CC lib/event/log_rpc.o 00:03:19.543 CC lib/event/app_rpc.o 00:03:19.543 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:19.543 CC lib/event/scheduler_static.o 00:03:19.802 LIB libspdk_event.a 00:03:19.802 SO libspdk_event.so.14.0 00:03:19.802 SYMLINK libspdk_event.so 00:03:20.059 LIB libspdk_fuse_dispatcher.a 00:03:20.059 SO libspdk_fuse_dispatcher.so.1.0 00:03:20.059 SYMLINK libspdk_fuse_dispatcher.so 00:03:20.317 LIB libspdk_nvme.a 00:03:20.576 SO libspdk_nvme.so.15.0 00:03:20.576 LIB libspdk_blob.a 00:03:20.576 SYMLINK libspdk_nvme.so 00:03:20.576 SO libspdk_blob.so.11.0 00:03:20.834 SYMLINK libspdk_blob.so 00:03:20.834 CC lib/lvol/lvol.o 00:03:21.093 CC lib/blobfs/blobfs.o 00:03:21.093 CC lib/blobfs/tree.o 00:03:21.659 LIB libspdk_lvol.a 00:03:21.659 SO libspdk_lvol.so.10.0 00:03:21.659 LIB libspdk_bdev.a 00:03:21.918 LIB libspdk_blobfs.a 00:03:21.918 SO libspdk_bdev.so.17.0 00:03:21.918 SYMLINK libspdk_lvol.so 00:03:21.918 SO libspdk_blobfs.so.10.0 00:03:21.918 SYMLINK libspdk_blobfs.so 00:03:21.918 SYMLINK libspdk_bdev.so 00:03:22.176 CC lib/nvmf/ctrlr.o 00:03:22.176 CC lib/nvmf/ctrlr_discovery.o 00:03:22.176 CC lib/nvmf/ctrlr_bdev.o 00:03:22.176 CC lib/scsi/dev.o 00:03:22.176 CC lib/nvmf/nvmf.o 00:03:22.176 CC lib/scsi/lun.o 00:03:22.176 CC lib/nvmf/subsystem.o 00:03:22.176 CC lib/ftl/ftl_core.o 00:03:22.176 CC lib/ublk/ublk.o 00:03:22.176 CC lib/nbd/nbd.o 00:03:22.176 CC lib/nbd/nbd_rpc.o 00:03:22.434 CC lib/scsi/port.o 00:03:22.434 CC lib/ftl/ftl_init.o 00:03:22.434 CC lib/ftl/ftl_layout.o 00:03:22.434 CC lib/scsi/scsi.o 00:03:22.434 CC lib/scsi/scsi_bdev.o 00:03:22.434 LIB libspdk_nbd.a 00:03:22.434 SO libspdk_nbd.so.7.0 
00:03:22.693 SYMLINK libspdk_nbd.so 00:03:22.693 CC lib/scsi/scsi_pr.o 00:03:22.693 CC lib/scsi/scsi_rpc.o 00:03:22.693 CC lib/scsi/task.o 00:03:22.693 CC lib/ublk/ublk_rpc.o 00:03:22.693 CC lib/ftl/ftl_debug.o 00:03:22.693 CC lib/nvmf/nvmf_rpc.o 00:03:22.693 CC lib/nvmf/transport.o 00:03:22.693 CC lib/nvmf/tcp.o 00:03:22.952 LIB libspdk_ublk.a 00:03:22.952 SO libspdk_ublk.so.3.0 00:03:22.952 SYMLINK libspdk_ublk.so 00:03:22.952 CC lib/nvmf/stubs.o 00:03:22.952 CC lib/nvmf/mdns_server.o 00:03:22.952 LIB libspdk_scsi.a 00:03:22.952 CC lib/ftl/ftl_io.o 00:03:22.952 CC lib/nvmf/rdma.o 00:03:22.952 SO libspdk_scsi.so.9.0 00:03:22.952 CC lib/ftl/ftl_sb.o 00:03:23.210 SYMLINK libspdk_scsi.so 00:03:23.210 CC lib/ftl/ftl_l2p.o 00:03:23.210 CC lib/nvmf/auth.o 00:03:23.210 CC lib/ftl/ftl_l2p_flat.o 00:03:23.210 CC lib/ftl/ftl_nv_cache.o 00:03:23.476 CC lib/iscsi/conn.o 00:03:23.476 CC lib/vhost/vhost.o 00:03:23.476 CC lib/ftl/ftl_band.o 00:03:23.476 CC lib/vhost/vhost_rpc.o 00:03:23.774 CC lib/ftl/ftl_band_ops.o 00:03:23.774 CC lib/iscsi/init_grp.o 00:03:23.774 CC lib/iscsi/iscsi.o 00:03:23.774 CC lib/vhost/vhost_scsi.o 00:03:23.774 CC lib/ftl/ftl_writer.o 00:03:24.032 CC lib/vhost/vhost_blk.o 00:03:24.032 CC lib/iscsi/param.o 00:03:24.032 CC lib/vhost/rte_vhost_user.o 00:03:24.032 CC lib/iscsi/portal_grp.o 00:03:24.032 CC lib/ftl/ftl_rq.o 00:03:24.291 CC lib/ftl/ftl_reloc.o 00:03:24.291 CC lib/ftl/ftl_l2p_cache.o 00:03:24.291 CC lib/iscsi/tgt_node.o 00:03:24.291 CC lib/ftl/ftl_p2l.o 00:03:24.291 CC lib/iscsi/iscsi_subsystem.o 00:03:24.549 CC lib/ftl/ftl_p2l_log.o 00:03:24.549 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.549 CC lib/iscsi/iscsi_rpc.o 00:03:24.549 CC lib/iscsi/task.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.806 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.064 LIB libspdk_iscsi.a 00:03:25.064 LIB libspdk_vhost.a 00:03:25.064 LIB libspdk_nvmf.a 00:03:25.064 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.064 SO libspdk_iscsi.so.8.0 00:03:25.064 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.064 CC lib/ftl/utils/ftl_conf.o 00:03:25.064 SO libspdk_vhost.so.8.0 00:03:25.064 CC lib/ftl/utils/ftl_md.o 00:03:25.064 CC lib/ftl/utils/ftl_mempool.o 00:03:25.064 SYMLINK libspdk_vhost.so 00:03:25.064 SYMLINK libspdk_iscsi.so 00:03:25.064 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.064 CC lib/ftl/utils/ftl_property.o 00:03:25.064 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:25.064 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:25.064 SO libspdk_nvmf.so.20.0 00:03:25.322 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:25.322 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:25.322 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:25.322 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:25.322 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:25.322 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:25.322 SYMLINK libspdk_nvmf.so 00:03:25.322 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.322 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.322 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.322 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:25.322 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:25.579 CC lib/ftl/base/ftl_base_dev.o 00:03:25.579 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.579 CC lib/ftl/ftl_trace.o 
00:03:25.836 LIB libspdk_ftl.a 00:03:25.836 SO libspdk_ftl.so.9.0 00:03:26.094 SYMLINK libspdk_ftl.so 00:03:26.351 CC module/env_dpdk/env_dpdk_rpc.o 00:03:26.609 CC module/keyring/file/keyring.o 00:03:26.609 CC module/blob/bdev/blob_bdev.o 00:03:26.609 CC module/accel/dsa/accel_dsa.o 00:03:26.609 CC module/accel/ioat/accel_ioat.o 00:03:26.609 CC module/accel/error/accel_error.o 00:03:26.609 CC module/keyring/linux/keyring.o 00:03:26.609 CC module/fsdev/aio/fsdev_aio.o 00:03:26.609 CC module/sock/posix/posix.o 00:03:26.609 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:26.609 LIB libspdk_env_dpdk_rpc.a 00:03:26.609 SO libspdk_env_dpdk_rpc.so.6.0 00:03:26.609 SYMLINK libspdk_env_dpdk_rpc.so 00:03:26.609 CC module/keyring/linux/keyring_rpc.o 00:03:26.609 CC module/keyring/file/keyring_rpc.o 00:03:26.609 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:26.609 CC module/accel/error/accel_error_rpc.o 00:03:26.609 LIB libspdk_scheduler_dynamic.a 00:03:26.609 CC module/accel/ioat/accel_ioat_rpc.o 00:03:26.609 SO libspdk_scheduler_dynamic.so.4.0 00:03:26.609 LIB libspdk_keyring_linux.a 00:03:26.609 CC module/accel/dsa/accel_dsa_rpc.o 00:03:26.609 LIB libspdk_keyring_file.a 00:03:26.867 SO libspdk_keyring_linux.so.1.0 00:03:26.867 SO libspdk_keyring_file.so.2.0 00:03:26.867 SYMLINK libspdk_scheduler_dynamic.so 00:03:26.867 LIB libspdk_blob_bdev.a 00:03:26.867 LIB libspdk_accel_error.a 00:03:26.867 CC module/fsdev/aio/linux_aio_mgr.o 00:03:26.867 SYMLINK libspdk_keyring_linux.so 00:03:26.867 SO libspdk_blob_bdev.so.11.0 00:03:26.867 LIB libspdk_accel_ioat.a 00:03:26.867 SYMLINK libspdk_keyring_file.so 00:03:26.867 SO libspdk_accel_error.so.2.0 00:03:26.867 SO libspdk_accel_ioat.so.6.0 00:03:26.867 LIB libspdk_accel_dsa.a 00:03:26.867 SYMLINK libspdk_blob_bdev.so 00:03:26.867 SYMLINK libspdk_accel_error.so 00:03:26.867 SO libspdk_accel_dsa.so.5.0 00:03:26.867 SYMLINK libspdk_accel_ioat.so 00:03:26.867 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:26.867 SYMLINK libspdk_accel_dsa.so 00:03:26.867 CC module/scheduler/gscheduler/gscheduler.o 00:03:26.867 CC module/accel/iaa/accel_iaa.o 00:03:27.125 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.125 CC module/bdev/delay/vbdev_delay.o 00:03:27.125 CC module/bdev/error/vbdev_error.o 00:03:27.125 CC module/bdev/gpt/gpt.o 00:03:27.125 LIB libspdk_fsdev_aio.a 00:03:27.125 LIB libspdk_scheduler_gscheduler.a 00:03:27.125 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.125 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.125 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.125 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.125 SO libspdk_fsdev_aio.so.1.0 00:03:27.125 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.125 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:27.125 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.125 SYMLINK libspdk_fsdev_aio.so 00:03:27.125 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.125 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.125 CC module/bdev/error/vbdev_error_rpc.o 00:03:27.383 LIB libspdk_sock_posix.a 00:03:27.383 LIB libspdk_blobfs_bdev.a 00:03:27.383 LIB libspdk_accel_iaa.a 00:03:27.383 SO libspdk_sock_posix.so.6.0 00:03:27.383 SO libspdk_blobfs_bdev.so.6.0 00:03:27.383 CC module/bdev/malloc/bdev_malloc.o 00:03:27.383 SO libspdk_accel_iaa.so.3.0 00:03:27.383 LIB libspdk_bdev_delay.a 00:03:27.383 SYMLINK libspdk_blobfs_bdev.so 00:03:27.383 CC module/bdev/null/bdev_null.o 00:03:27.383 SYMLINK libspdk_sock_posix.so 00:03:27.383 SO libspdk_bdev_delay.so.6.0 00:03:27.383 SYMLINK 
libspdk_accel_iaa.so 00:03:27.383 CC module/bdev/null/bdev_null_rpc.o 00:03:27.383 LIB libspdk_bdev_gpt.a 00:03:27.383 CC module/bdev/nvme/bdev_nvme.o 00:03:27.383 LIB libspdk_bdev_error.a 00:03:27.383 SO libspdk_bdev_gpt.so.6.0 00:03:27.383 SYMLINK libspdk_bdev_delay.so 00:03:27.383 SO libspdk_bdev_error.so.6.0 00:03:27.383 SYMLINK libspdk_bdev_gpt.so 00:03:27.383 SYMLINK libspdk_bdev_error.so 00:03:27.383 CC module/bdev/passthru/vbdev_passthru.o 00:03:27.383 CC module/bdev/raid/bdev_raid.o 00:03:27.642 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.642 CC module/bdev/split/vbdev_split.o 00:03:27.642 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:27.642 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:27.642 CC module/bdev/xnvme/bdev_xnvme.o 00:03:27.642 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:27.642 LIB libspdk_bdev_null.a 00:03:27.642 SO libspdk_bdev_null.so.6.0 00:03:27.642 SYMLINK libspdk_bdev_null.so 00:03:27.642 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.642 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:27.642 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:27.642 LIB libspdk_bdev_malloc.a 00:03:27.642 SO libspdk_bdev_malloc.so.6.0 00:03:27.899 LIB libspdk_bdev_split.a 00:03:27.899 SYMLINK libspdk_bdev_malloc.so 00:03:27.899 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.899 SO libspdk_bdev_split.so.6.0 00:03:27.899 LIB libspdk_bdev_passthru.a 00:03:27.899 SO libspdk_bdev_passthru.so.6.0 00:03:27.899 LIB libspdk_bdev_lvol.a 00:03:27.899 LIB libspdk_bdev_xnvme.a 00:03:27.899 SYMLINK libspdk_bdev_split.so 00:03:27.899 SO libspdk_bdev_lvol.so.6.0 00:03:27.899 SO libspdk_bdev_xnvme.so.3.0 00:03:27.899 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.899 CC module/bdev/aio/bdev_aio.o 00:03:27.899 SYMLINK libspdk_bdev_passthru.so 00:03:27.899 CC module/bdev/raid/raid0.o 00:03:27.899 CC module/bdev/ftl/bdev_ftl.o 00:03:27.899 SYMLINK libspdk_bdev_xnvme.so 00:03:27.899 SYMLINK libspdk_bdev_lvol.so 00:03:27.899 CC module/bdev/aio/bdev_aio_rpc.o 00:03:27.899 CC module/bdev/raid/raid1.o 00:03:28.157 LIB libspdk_bdev_zone_block.a 00:03:28.157 SO libspdk_bdev_zone_block.so.6.0 00:03:28.157 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.157 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.157 CC module/bdev/raid/concat.o 00:03:28.157 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.157 SYMLINK libspdk_bdev_zone_block.so 00:03:28.157 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.157 LIB libspdk_bdev_aio.a 00:03:28.157 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.157 SO libspdk_bdev_aio.so.6.0 00:03:28.157 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.157 SYMLINK libspdk_bdev_aio.so 00:03:28.157 LIB libspdk_bdev_ftl.a 00:03:28.157 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.157 SO libspdk_bdev_ftl.so.6.0 00:03:28.414 CC module/bdev/nvme/nvme_rpc.o 00:03:28.414 SYMLINK libspdk_bdev_ftl.so 00:03:28.414 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.414 LIB libspdk_bdev_raid.a 00:03:28.414 CC module/bdev/nvme/vbdev_opal.o 00:03:28.414 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.414 SO libspdk_bdev_raid.so.6.0 00:03:28.415 LIB libspdk_bdev_iscsi.a 00:03:28.415 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.415 SO libspdk_bdev_iscsi.so.6.0 00:03:28.415 SYMLINK libspdk_bdev_raid.so 00:03:28.415 SYMLINK libspdk_bdev_iscsi.so 00:03:28.672 LIB libspdk_bdev_virtio.a 00:03:28.672 SO libspdk_bdev_virtio.so.6.0 00:03:28.672 SYMLINK libspdk_bdev_virtio.so 00:03:30.041 LIB libspdk_bdev_nvme.a 00:03:30.041 SO libspdk_bdev_nvme.so.7.1 00:03:30.041 SYMLINK libspdk_bdev_nvme.so 00:03:30.606 CC 
module/event/subsystems/sock/sock.o 00:03:30.606 CC module/event/subsystems/iobuf/iobuf.o 00:03:30.606 CC module/event/subsystems/vmd/vmd.o 00:03:30.606 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:30.606 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:30.606 CC module/event/subsystems/keyring/keyring.o 00:03:30.606 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:30.606 CC module/event/subsystems/scheduler/scheduler.o 00:03:30.606 CC module/event/subsystems/fsdev/fsdev.o 00:03:30.606 LIB libspdk_event_sock.a 00:03:30.607 LIB libspdk_event_keyring.a 00:03:30.607 LIB libspdk_event_vmd.a 00:03:30.607 SO libspdk_event_sock.so.5.0 00:03:30.607 LIB libspdk_event_iobuf.a 00:03:30.607 LIB libspdk_event_fsdev.a 00:03:30.607 SO libspdk_event_keyring.so.1.0 00:03:30.607 SO libspdk_event_vmd.so.6.0 00:03:30.607 SO libspdk_event_fsdev.so.1.0 00:03:30.607 SO libspdk_event_iobuf.so.3.0 00:03:30.607 LIB libspdk_event_scheduler.a 00:03:30.607 LIB libspdk_event_vhost_blk.a 00:03:30.607 SYMLINK libspdk_event_keyring.so 00:03:30.607 SYMLINK libspdk_event_sock.so 00:03:30.607 SO libspdk_event_scheduler.so.4.0 00:03:30.607 SO libspdk_event_vhost_blk.so.3.0 00:03:30.607 SYMLINK libspdk_event_vmd.so 00:03:30.607 SYMLINK libspdk_event_fsdev.so 00:03:30.607 SYMLINK libspdk_event_iobuf.so 00:03:30.607 SYMLINK libspdk_event_vhost_blk.so 00:03:30.607 SYMLINK libspdk_event_scheduler.so 00:03:30.865 CC module/event/subsystems/accel/accel.o 00:03:31.123 LIB libspdk_event_accel.a 00:03:31.123 SO libspdk_event_accel.so.6.0 00:03:31.123 SYMLINK libspdk_event_accel.so 00:03:31.381 CC module/event/subsystems/bdev/bdev.o 00:03:31.381 LIB libspdk_event_bdev.a 00:03:31.638 SO libspdk_event_bdev.so.6.0 00:03:31.638 SYMLINK libspdk_event_bdev.so 00:03:31.638 CC module/event/subsystems/scsi/scsi.o 00:03:31.638 CC module/event/subsystems/ublk/ublk.o 00:03:31.638 CC module/event/subsystems/nbd/nbd.o 00:03:31.638 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:31.638 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:31.896 LIB libspdk_event_ublk.a 00:03:31.896 LIB libspdk_event_scsi.a 00:03:31.896 LIB libspdk_event_nbd.a 00:03:31.896 SO libspdk_event_scsi.so.6.0 00:03:31.896 SO libspdk_event_ublk.so.3.0 00:03:31.896 SO libspdk_event_nbd.so.6.0 00:03:31.896 SYMLINK libspdk_event_nbd.so 00:03:31.896 SYMLINK libspdk_event_scsi.so 00:03:31.896 SYMLINK libspdk_event_ublk.so 00:03:31.896 LIB libspdk_event_nvmf.a 00:03:31.896 SO libspdk_event_nvmf.so.6.0 00:03:32.154 SYMLINK libspdk_event_nvmf.so 00:03:32.154 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:32.154 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.154 LIB libspdk_event_vhost_scsi.a 00:03:32.154 SO libspdk_event_vhost_scsi.so.3.0 00:03:32.154 LIB libspdk_event_iscsi.a 00:03:32.154 SO libspdk_event_iscsi.so.6.0 00:03:32.154 SYMLINK libspdk_event_vhost_scsi.so 00:03:32.411 SYMLINK libspdk_event_iscsi.so 00:03:32.411 SO libspdk.so.6.0 00:03:32.411 SYMLINK libspdk.so 00:03:32.670 CC app/trace_record/trace_record.o 00:03:32.670 TEST_HEADER include/spdk/accel.h 00:03:32.670 TEST_HEADER include/spdk/accel_module.h 00:03:32.670 TEST_HEADER include/spdk/assert.h 00:03:32.670 TEST_HEADER include/spdk/barrier.h 00:03:32.670 TEST_HEADER include/spdk/base64.h 00:03:32.670 TEST_HEADER include/spdk/bdev.h 00:03:32.670 TEST_HEADER include/spdk/bdev_module.h 00:03:32.670 CXX app/trace/trace.o 00:03:32.670 TEST_HEADER include/spdk/bdev_zone.h 00:03:32.670 TEST_HEADER include/spdk/bit_array.h 00:03:32.670 TEST_HEADER include/spdk/bit_pool.h 00:03:32.670 TEST_HEADER 
include/spdk/blob_bdev.h 00:03:32.670 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:32.670 TEST_HEADER include/spdk/blobfs.h 00:03:32.670 TEST_HEADER include/spdk/blob.h 00:03:32.670 TEST_HEADER include/spdk/conf.h 00:03:32.670 TEST_HEADER include/spdk/config.h 00:03:32.670 TEST_HEADER include/spdk/cpuset.h 00:03:32.670 TEST_HEADER include/spdk/crc16.h 00:03:32.670 TEST_HEADER include/spdk/crc32.h 00:03:32.670 TEST_HEADER include/spdk/crc64.h 00:03:32.670 TEST_HEADER include/spdk/dif.h 00:03:32.670 TEST_HEADER include/spdk/dma.h 00:03:32.670 TEST_HEADER include/spdk/endian.h 00:03:32.670 TEST_HEADER include/spdk/env_dpdk.h 00:03:32.670 TEST_HEADER include/spdk/env.h 00:03:32.670 TEST_HEADER include/spdk/event.h 00:03:32.670 TEST_HEADER include/spdk/fd_group.h 00:03:32.670 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.670 TEST_HEADER include/spdk/fd.h 00:03:32.670 TEST_HEADER include/spdk/file.h 00:03:32.670 TEST_HEADER include/spdk/fsdev.h 00:03:32.670 TEST_HEADER include/spdk/fsdev_module.h 00:03:32.670 TEST_HEADER include/spdk/ftl.h 00:03:32.670 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:32.670 TEST_HEADER include/spdk/gpt_spec.h 00:03:32.670 TEST_HEADER include/spdk/hexlify.h 00:03:32.670 TEST_HEADER include/spdk/histogram_data.h 00:03:32.670 TEST_HEADER include/spdk/idxd.h 00:03:32.670 TEST_HEADER include/spdk/idxd_spec.h 00:03:32.670 TEST_HEADER include/spdk/init.h 00:03:32.670 TEST_HEADER include/spdk/ioat.h 00:03:32.670 TEST_HEADER include/spdk/ioat_spec.h 00:03:32.670 TEST_HEADER include/spdk/iscsi_spec.h 00:03:32.670 TEST_HEADER include/spdk/json.h 00:03:32.670 CC examples/ioat/perf/perf.o 00:03:32.670 TEST_HEADER include/spdk/jsonrpc.h 00:03:32.670 CC test/thread/poller_perf/poller_perf.o 00:03:32.670 TEST_HEADER include/spdk/keyring.h 00:03:32.670 CC examples/util/zipf/zipf.o 00:03:32.670 TEST_HEADER include/spdk/keyring_module.h 00:03:32.670 TEST_HEADER include/spdk/likely.h 00:03:32.670 TEST_HEADER include/spdk/log.h 00:03:32.670 TEST_HEADER include/spdk/lvol.h 00:03:32.670 TEST_HEADER include/spdk/md5.h 00:03:32.670 TEST_HEADER include/spdk/memory.h 00:03:32.670 TEST_HEADER include/spdk/mmio.h 00:03:32.670 TEST_HEADER include/spdk/nbd.h 00:03:32.670 TEST_HEADER include/spdk/net.h 00:03:32.670 TEST_HEADER include/spdk/notify.h 00:03:32.670 TEST_HEADER include/spdk/nvme.h 00:03:32.670 TEST_HEADER include/spdk/nvme_intel.h 00:03:32.670 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:32.670 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:32.670 CC test/dma/test_dma/test_dma.o 00:03:32.670 TEST_HEADER include/spdk/nvme_spec.h 00:03:32.670 TEST_HEADER include/spdk/nvme_zns.h 00:03:32.670 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:32.670 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:32.670 TEST_HEADER include/spdk/nvmf.h 00:03:32.670 TEST_HEADER include/spdk/nvmf_spec.h 00:03:32.670 TEST_HEADER include/spdk/nvmf_transport.h 00:03:32.670 TEST_HEADER include/spdk/opal.h 00:03:32.670 TEST_HEADER include/spdk/opal_spec.h 00:03:32.670 TEST_HEADER include/spdk/pci_ids.h 00:03:32.670 CC test/app/bdev_svc/bdev_svc.o 00:03:32.670 TEST_HEADER include/spdk/pipe.h 00:03:32.670 TEST_HEADER include/spdk/queue.h 00:03:32.670 TEST_HEADER include/spdk/reduce.h 00:03:32.670 TEST_HEADER include/spdk/rpc.h 00:03:32.670 TEST_HEADER include/spdk/scheduler.h 00:03:32.670 TEST_HEADER include/spdk/scsi.h 00:03:32.670 TEST_HEADER include/spdk/scsi_spec.h 00:03:32.670 TEST_HEADER include/spdk/sock.h 00:03:32.670 TEST_HEADER include/spdk/stdinc.h 00:03:32.670 TEST_HEADER include/spdk/string.h 
00:03:32.670 TEST_HEADER include/spdk/thread.h 00:03:32.670 TEST_HEADER include/spdk/trace.h 00:03:32.670 TEST_HEADER include/spdk/trace_parser.h 00:03:32.670 TEST_HEADER include/spdk/tree.h 00:03:32.670 TEST_HEADER include/spdk/ublk.h 00:03:32.670 TEST_HEADER include/spdk/util.h 00:03:32.670 TEST_HEADER include/spdk/uuid.h 00:03:32.670 TEST_HEADER include/spdk/version.h 00:03:32.670 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.670 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:32.670 TEST_HEADER include/spdk/vhost.h 00:03:32.670 TEST_HEADER include/spdk/vmd.h 00:03:32.670 TEST_HEADER include/spdk/xor.h 00:03:32.670 TEST_HEADER include/spdk/zipf.h 00:03:32.670 CXX test/cpp_headers/accel.o 00:03:32.670 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.928 LINK interrupt_tgt 00:03:32.928 LINK poller_perf 00:03:32.928 LINK zipf 00:03:32.928 LINK spdk_trace_record 00:03:32.928 CXX test/cpp_headers/accel_module.o 00:03:32.928 LINK bdev_svc 00:03:32.928 LINK ioat_perf 00:03:32.928 LINK spdk_trace 00:03:32.928 CXX test/cpp_headers/assert.o 00:03:32.928 CXX test/cpp_headers/barrier.o 00:03:32.928 CC test/env/vtophys/vtophys.o 00:03:32.928 CC test/rpc_client/rpc_client_test.o 00:03:33.186 CC examples/ioat/verify/verify.o 00:03:33.186 LINK test_dma 00:03:33.186 LINK vtophys 00:03:33.186 CXX test/cpp_headers/base64.o 00:03:33.186 CC test/app/histogram_perf/histogram_perf.o 00:03:33.186 LINK mem_callbacks 00:03:33.186 CC examples/thread/thread/thread_ex.o 00:03:33.186 CC app/nvmf_tgt/nvmf_main.o 00:03:33.186 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.186 LINK rpc_client_test 00:03:33.186 LINK histogram_perf 00:03:33.186 CXX test/cpp_headers/bdev.o 00:03:33.186 CXX test/cpp_headers/bdev_module.o 00:03:33.186 LINK verify 00:03:33.444 CXX test/cpp_headers/bdev_zone.o 00:03:33.444 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.444 CXX test/cpp_headers/bit_array.o 00:03:33.444 LINK nvmf_tgt 00:03:33.444 CXX test/cpp_headers/bit_pool.o 00:03:33.444 LINK thread 00:03:33.444 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.444 CC test/env/memory/memory_ut.o 00:03:33.444 CC test/env/pci/pci_ut.o 00:03:33.444 CXX test/cpp_headers/blob_bdev.o 00:03:33.444 LINK env_dpdk_post_init 00:03:33.444 CC test/app/jsoncat/jsoncat.o 00:03:33.444 CC test/app/stub/stub.o 00:03:33.444 LINK nvme_fuzz 00:03:33.702 LINK iscsi_tgt 00:03:33.702 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.702 LINK jsoncat 00:03:33.702 LINK stub 00:03:33.702 CC examples/sock/hello_world/hello_sock.o 00:03:33.702 CC app/spdk_tgt/spdk_tgt.o 00:03:33.702 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.702 CXX test/cpp_headers/blobfs.o 00:03:33.702 CC test/event/event_perf/event_perf.o 00:03:33.960 CC app/spdk_lspci/spdk_lspci.o 00:03:33.960 LINK pci_ut 00:03:33.960 LINK spdk_tgt 00:03:33.960 CC app/spdk_nvme_perf/perf.o 00:03:33.960 LINK hello_sock 00:03:33.960 CXX test/cpp_headers/blob.o 00:03:33.960 LINK event_perf 00:03:33.960 LINK spdk_lspci 00:03:33.960 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.218 CXX test/cpp_headers/conf.o 00:03:34.218 CC examples/vmd/led/led.o 00:03:34.218 LINK lsvmd 00:03:34.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.218 CC test/event/reactor/reactor.o 00:03:34.218 CC app/spdk_nvme_identify/identify.o 00:03:34.218 CC test/nvme/aer/aer.o 00:03:34.218 CXX test/cpp_headers/config.o 00:03:34.218 CXX test/cpp_headers/cpuset.o 00:03:34.218 LINK led 00:03:34.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.218 LINK reactor 00:03:34.477 CXX test/cpp_headers/crc16.o 00:03:34.477 CC test/accel/dif/dif.o 
00:03:34.477 CXX test/cpp_headers/crc32.o 00:03:34.477 LINK aer 00:03:34.477 LINK memory_ut 00:03:34.477 CC test/event/reactor_perf/reactor_perf.o 00:03:34.478 CC examples/idxd/perf/perf.o 00:03:34.478 CXX test/cpp_headers/crc64.o 00:03:34.478 LINK vhost_fuzz 00:03:34.736 LINK reactor_perf 00:03:34.736 CXX test/cpp_headers/dif.o 00:03:34.736 CC test/nvme/reset/reset.o 00:03:34.736 CC test/nvme/sgl/sgl.o 00:03:34.736 CC test/nvme/e2edp/nvme_dp.o 00:03:34.736 LINK spdk_nvme_perf 00:03:34.736 CXX test/cpp_headers/dma.o 00:03:34.994 CC test/event/app_repeat/app_repeat.o 00:03:34.995 LINK idxd_perf 00:03:34.995 LINK spdk_nvme_identify 00:03:34.995 LINK reset 00:03:34.995 LINK sgl 00:03:34.995 CXX test/cpp_headers/endian.o 00:03:34.995 LINK dif 00:03:34.995 LINK app_repeat 00:03:34.995 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.995 LINK nvme_dp 00:03:34.995 CC app/spdk_top/spdk_top.o 00:03:34.995 CXX test/cpp_headers/env_dpdk.o 00:03:35.253 CC app/vhost/vhost.o 00:03:35.253 CC app/spdk_dd/spdk_dd.o 00:03:35.253 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.253 LINK iscsi_fuzz 00:03:35.253 CXX test/cpp_headers/env.o 00:03:35.253 LINK spdk_nvme_discover 00:03:35.253 CC test/nvme/overhead/overhead.o 00:03:35.253 CC test/event/scheduler/scheduler.o 00:03:35.253 LINK vhost 00:03:35.253 CC examples/accel/perf/accel_perf.o 00:03:35.253 CXX test/cpp_headers/event.o 00:03:35.511 LINK hello_fsdev 00:03:35.511 LINK overhead 00:03:35.511 LINK spdk_dd 00:03:35.511 LINK scheduler 00:03:35.511 CC app/fio/nvme/fio_plugin.o 00:03:35.511 CXX test/cpp_headers/fd_group.o 00:03:35.511 CC test/blobfs/mkfs/mkfs.o 00:03:35.511 CC test/nvme/err_injection/err_injection.o 00:03:35.769 CC test/nvme/startup/startup.o 00:03:35.769 CXX test/cpp_headers/fd.o 00:03:35.769 CC examples/blob/hello_world/hello_blob.o 00:03:35.769 LINK mkfs 00:03:35.769 LINK err_injection 00:03:35.769 CC app/fio/bdev/fio_plugin.o 00:03:35.769 LINK accel_perf 00:03:35.769 LINK startup 00:03:35.769 CXX test/cpp_headers/file.o 00:03:35.769 CC test/lvol/esnap/esnap.o 00:03:35.769 LINK hello_blob 00:03:36.027 LINK spdk_top 00:03:36.027 CXX test/cpp_headers/fsdev.o 00:03:36.027 CC test/nvme/reserve/reserve.o 00:03:36.027 CC examples/nvme/hello_world/hello_world.o 00:03:36.027 CC test/bdev/bdevio/bdevio.o 00:03:36.027 CXX test/cpp_headers/fsdev_module.o 00:03:36.027 CXX test/cpp_headers/ftl.o 00:03:36.027 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.027 LINK spdk_nvme 00:03:36.027 CC examples/blob/cli/blobcli.o 00:03:36.027 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.285 LINK reserve 00:03:36.285 LINK hello_world 00:03:36.285 CC test/nvme/simple_copy/simple_copy.o 00:03:36.285 LINK spdk_bdev 00:03:36.285 CC test/nvme/connect_stress/connect_stress.o 00:03:36.285 CXX test/cpp_headers/gpt_spec.o 00:03:36.285 LINK hello_bdev 00:03:36.285 LINK bdevio 00:03:36.285 CC examples/nvme/reconnect/reconnect.o 00:03:36.285 CC test/nvme/boot_partition/boot_partition.o 00:03:36.285 CXX test/cpp_headers/hexlify.o 00:03:36.285 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.543 LINK connect_stress 00:03:36.543 LINK simple_copy 00:03:36.543 CXX test/cpp_headers/histogram_data.o 00:03:36.543 LINK blobcli 00:03:36.543 LINK boot_partition 00:03:36.543 CC test/nvme/compliance/nvme_compliance.o 00:03:36.543 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.543 CXX test/cpp_headers/idxd.o 00:03:36.543 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.543 CXX test/cpp_headers/idxd_spec.o 00:03:36.802 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:03:36.802 LINK reconnect 00:03:36.802 CC test/nvme/fdp/fdp.o 00:03:36.802 LINK fused_ordering 00:03:36.802 CXX test/cpp_headers/init.o 00:03:36.802 CC test/nvme/cuse/cuse.o 00:03:36.802 LINK nvme_compliance 00:03:36.802 LINK doorbell_aers 00:03:36.802 CC examples/nvme/arbitration/arbitration.o 00:03:36.802 CXX test/cpp_headers/ioat.o 00:03:36.802 LINK nvme_manage 00:03:37.060 CC examples/nvme/hotplug/hotplug.o 00:03:37.060 CXX test/cpp_headers/ioat_spec.o 00:03:37.060 CXX test/cpp_headers/iscsi_spec.o 00:03:37.060 LINK fdp 00:03:37.060 CXX test/cpp_headers/json.o 00:03:37.060 CXX test/cpp_headers/jsonrpc.o 00:03:37.060 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.060 CXX test/cpp_headers/keyring.o 00:03:37.060 LINK arbitration 00:03:37.060 LINK hotplug 00:03:37.318 CXX test/cpp_headers/keyring_module.o 00:03:37.318 CC examples/nvme/abort/abort.o 00:03:37.318 CXX test/cpp_headers/likely.o 00:03:37.318 CXX test/cpp_headers/log.o 00:03:37.318 LINK cmb_copy 00:03:37.318 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.318 CXX test/cpp_headers/lvol.o 00:03:37.318 CXX test/cpp_headers/md5.o 00:03:37.318 CXX test/cpp_headers/memory.o 00:03:37.318 LINK bdevperf 00:03:37.318 CXX test/cpp_headers/mmio.o 00:03:37.318 CXX test/cpp_headers/nbd.o 00:03:37.318 CXX test/cpp_headers/net.o 00:03:37.318 CXX test/cpp_headers/notify.o 00:03:37.318 LINK pmr_persistence 00:03:37.576 CXX test/cpp_headers/nvme.o 00:03:37.576 CXX test/cpp_headers/nvme_intel.o 00:03:37.576 LINK abort 00:03:37.576 CXX test/cpp_headers/nvme_ocssd.o 00:03:37.576 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:37.576 CXX test/cpp_headers/nvme_spec.o 00:03:37.576 CXX test/cpp_headers/nvme_zns.o 00:03:37.576 CXX test/cpp_headers/nvmf_cmd.o 00:03:37.576 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:37.576 CXX test/cpp_headers/nvmf.o 00:03:37.576 CXX test/cpp_headers/nvmf_spec.o 00:03:37.576 CXX test/cpp_headers/nvmf_transport.o 00:03:37.576 CXX test/cpp_headers/opal.o 00:03:37.576 CXX test/cpp_headers/opal_spec.o 00:03:37.833 CXX test/cpp_headers/pci_ids.o 00:03:37.833 CXX test/cpp_headers/pipe.o 00:03:37.833 CXX test/cpp_headers/queue.o 00:03:37.833 CXX test/cpp_headers/reduce.o 00:03:37.833 CC examples/nvmf/nvmf/nvmf.o 00:03:37.833 LINK cuse 00:03:37.833 CXX test/cpp_headers/rpc.o 00:03:37.833 CXX test/cpp_headers/scheduler.o 00:03:37.833 CXX test/cpp_headers/scsi.o 00:03:37.833 CXX test/cpp_headers/scsi_spec.o 00:03:37.833 CXX test/cpp_headers/sock.o 00:03:37.833 CXX test/cpp_headers/stdinc.o 00:03:37.833 CXX test/cpp_headers/string.o 00:03:37.833 CXX test/cpp_headers/thread.o 00:03:37.833 CXX test/cpp_headers/trace.o 00:03:37.833 CXX test/cpp_headers/trace_parser.o 00:03:37.833 CXX test/cpp_headers/tree.o 00:03:38.092 CXX test/cpp_headers/ublk.o 00:03:38.092 CXX test/cpp_headers/util.o 00:03:38.092 CXX test/cpp_headers/uuid.o 00:03:38.092 CXX test/cpp_headers/version.o 00:03:38.092 CXX test/cpp_headers/vfio_user_pci.o 00:03:38.092 CXX test/cpp_headers/vfio_user_spec.o 00:03:38.092 CXX test/cpp_headers/vhost.o 00:03:38.092 CXX test/cpp_headers/vmd.o 00:03:38.092 LINK nvmf 00:03:38.092 CXX test/cpp_headers/xor.o 00:03:38.092 CXX test/cpp_headers/zipf.o 00:03:41.374 LINK esnap 00:03:41.374 00:03:41.374 real 1m10.682s 00:03:41.374 user 6m20.846s 00:03:41.374 sys 1m12.236s 00:03:41.374 06:27:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:41.374 06:27:33 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.374 ************************************ 00:03:41.374 END TEST make 00:03:41.374 
************************************ 00:03:41.374 06:27:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.374 06:27:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.374 06:27:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.374 06:27:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.374 06:27:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.374 06:27:33 -- pm/common@44 -- $ pid=5069 00:03:41.374 06:27:33 -- pm/common@50 -- $ kill -TERM 5069 00:03:41.374 06:27:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.374 06:27:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.374 06:27:33 -- pm/common@44 -- $ pid=5070 00:03:41.374 06:27:33 -- pm/common@50 -- $ kill -TERM 5070 00:03:41.374 06:27:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:41.374 06:27:33 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:41.374 06:27:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:41.374 06:27:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:41.374 06:27:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:41.374 06:27:33 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:41.374 06:27:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.374 06:27:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.374 06:27:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.374 06:27:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.374 06:27:33 -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.374 06:27:33 -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.374 06:27:33 -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.374 06:27:33 -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.374 06:27:33 -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.374 06:27:33 -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.374 06:27:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.374 06:27:33 -- scripts/common.sh@344 -- # case "$op" in 00:03:41.374 06:27:33 -- scripts/common.sh@345 -- # : 1 00:03:41.374 06:27:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.374 06:27:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.374 06:27:33 -- scripts/common.sh@365 -- # decimal 1 00:03:41.374 06:27:33 -- scripts/common.sh@353 -- # local d=1 00:03:41.374 06:27:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.374 06:27:33 -- scripts/common.sh@355 -- # echo 1 00:03:41.374 06:27:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.633 06:27:33 -- scripts/common.sh@366 -- # decimal 2 00:03:41.633 06:27:33 -- scripts/common.sh@353 -- # local d=2 00:03:41.633 06:27:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.633 06:27:33 -- scripts/common.sh@355 -- # echo 2 00:03:41.633 06:27:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.633 06:27:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.633 06:27:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.633 06:27:33 -- scripts/common.sh@368 -- # return 0 00:03:41.633 06:27:33 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.633 06:27:33 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.633 --rc genhtml_branch_coverage=1 00:03:41.633 --rc genhtml_function_coverage=1 00:03:41.633 --rc genhtml_legend=1 00:03:41.633 --rc geninfo_all_blocks=1 00:03:41.633 --rc geninfo_unexecuted_blocks=1 00:03:41.633 00:03:41.633 ' 00:03:41.633 06:27:33 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.633 --rc genhtml_branch_coverage=1 00:03:41.633 --rc genhtml_function_coverage=1 00:03:41.633 --rc genhtml_legend=1 00:03:41.633 --rc geninfo_all_blocks=1 00:03:41.633 --rc geninfo_unexecuted_blocks=1 00:03:41.633 00:03:41.633 ' 00:03:41.633 06:27:33 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.633 --rc genhtml_branch_coverage=1 00:03:41.633 --rc genhtml_function_coverage=1 00:03:41.633 --rc genhtml_legend=1 00:03:41.633 --rc geninfo_all_blocks=1 00:03:41.633 --rc geninfo_unexecuted_blocks=1 00:03:41.633 00:03:41.633 ' 00:03:41.633 06:27:33 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.633 --rc genhtml_branch_coverage=1 00:03:41.633 --rc genhtml_function_coverage=1 00:03:41.633 --rc genhtml_legend=1 00:03:41.633 --rc geninfo_all_blocks=1 00:03:41.633 --rc geninfo_unexecuted_blocks=1 00:03:41.633 00:03:41.633 ' 00:03:41.633 06:27:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:41.633 06:27:33 -- nvmf/common.sh@7 -- # uname -s 00:03:41.633 06:27:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.633 06:27:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.633 06:27:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.633 06:27:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.633 06:27:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.633 06:27:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.633 06:27:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.633 06:27:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.633 06:27:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.633 06:27:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.633 06:27:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:03:41.633 
06:27:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:03:41.633 06:27:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.633 06:27:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.633 06:27:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.633 06:27:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.633 06:27:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.633 06:27:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.633 06:27:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.633 06:27:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.633 06:27:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.633 06:27:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.633 06:27:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.633 06:27:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.633 06:27:33 -- paths/export.sh@5 -- # export PATH 00:03:41.633 06:27:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.633 06:27:33 -- nvmf/common.sh@51 -- # : 0 00:03:41.633 06:27:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.633 06:27:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.633 06:27:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.633 06:27:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.633 06:27:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.633 06:27:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.633 06:27:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.633 06:27:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.633 06:27:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.633 06:27:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.633 06:27:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.633 06:27:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.633 06:27:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.633 06:27:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.633 06:27:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.633 06:27:33 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.633 06:27:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.633 06:27:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.633 06:27:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.633 06:27:33 -- spdk/autotest.sh@48 -- # udevadm_pid=54315 00:03:41.633 06:27:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.633 06:27:33 -- pm/common@17 -- # local monitor 00:03:41.633 06:27:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.633 06:27:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.633 06:27:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.633 06:27:33 -- pm/common@25 -- # sleep 1 00:03:41.633 06:27:33 -- pm/common@21 -- # date +%s 00:03:41.633 06:27:33 -- pm/common@21 -- # date +%s 00:03:41.633 06:27:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731997653 00:03:41.634 06:27:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731997653 00:03:41.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731997653_collect-cpu-load.pm.log 00:03:41.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731997653_collect-vmstat.pm.log 00:03:42.574 06:27:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.574 06:27:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.574 06:27:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.574 06:27:34 -- common/autotest_common.sh@10 -- # set +x 00:03:42.574 06:27:34 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.574 06:27:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:42.574 06:27:34 -- common/autotest_common.sh@10 -- # set +x 00:03:42.574 06:27:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.574 06:27:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.574 06:27:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.574 06:27:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.574 06:27:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.574 06:27:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.574 06:27:34 -- common/autotest_common.sh@1457 -- # uname 00:03:42.574 06:27:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:42.574 06:27:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.574 06:27:34 -- common/autotest_common.sh@1477 -- # uname 00:03:42.574 06:27:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:42.574 06:27:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:42.574 06:27:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:42.831 lcov: LCOV version 1.15 00:03:42.831 06:27:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:57.799 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.799 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.678 06:28:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:12.678 06:28:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.678 06:28:02 -- common/autotest_common.sh@10 -- # set +x 00:04:12.678 06:28:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:12.678 06:28:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.678 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:12.678 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:12.678 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:12.678 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:12.678 06:28:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.678 06:28:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:12.678 06:28:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:12.678 06:28:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:12.678 
06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.678 06:28:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:04:12.678 06:28:03 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:04:12.678 06:28:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:12.678 06:28:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.678 06:28:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.678 06:28:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.678 06:28:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.678 06:28:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.678 06:28:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.678 06:28:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.678 No valid GPT data, bailing 00:04:12.678 06:28:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.678 06:28:03 -- scripts/common.sh@394 -- # pt= 00:04:12.678 06:28:03 -- scripts/common.sh@395 -- # return 1 00:04:12.678 06:28:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.678 1+0 records in 00:04:12.678 1+0 records out 00:04:12.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108006 s, 97.1 MB/s 00:04:12.678 06:28:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.678 06:28:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.678 06:28:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:12.678 06:28:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:12.678 06:28:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.678 No valid GPT data, bailing 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # pt= 00:04:12.679 06:28:03 -- scripts/common.sh@395 -- # return 1 00:04:12.679 06:28:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.679 1+0 records in 00:04:12.679 1+0 records out 00:04:12.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436828 s, 240 MB/s 00:04:12.679 06:28:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.679 06:28:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.679 06:28:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:12.679 06:28:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:12.679 06:28:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.679 No valid GPT data, bailing 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # pt= 00:04:12.679 06:28:03 -- scripts/common.sh@395 -- # return 1 00:04:12.679 06:28:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.679 1+0 
records in 00:04:12.679 1+0 records out 00:04:12.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369901 s, 283 MB/s 00:04:12.679 06:28:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.679 06:28:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.679 06:28:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:12.679 06:28:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:12.679 06:28:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.679 No valid GPT data, bailing 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.679 06:28:03 -- scripts/common.sh@394 -- # pt= 00:04:12.679 06:28:03 -- scripts/common.sh@395 -- # return 1 00:04:12.679 06:28:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.679 1+0 records in 00:04:12.679 1+0 records out 00:04:12.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701251 s, 150 MB/s 00:04:12.679 06:28:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.679 06:28:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.679 06:28:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:12.679 06:28:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:12.679 06:28:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:12.679 No valid GPT data, bailing 00:04:12.679 06:28:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:12.679 06:28:04 -- scripts/common.sh@394 -- # pt= 00:04:12.679 06:28:04 -- scripts/common.sh@395 -- # return 1 00:04:12.679 06:28:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:12.679 1+0 records in 00:04:12.679 1+0 records out 00:04:12.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335511 s, 313 MB/s 00:04:12.679 06:28:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.679 06:28:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.679 06:28:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:12.679 06:28:04 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:12.679 06:28:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:12.679 No valid GPT data, bailing 00:04:12.679 06:28:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:12.679 06:28:04 -- scripts/common.sh@394 -- # pt= 00:04:12.679 06:28:04 -- scripts/common.sh@395 -- # return 1 00:04:12.679 06:28:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:12.679 1+0 records in 00:04:12.679 1+0 records out 00:04:12.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041107 s, 255 MB/s 00:04:12.679 06:28:04 -- spdk/autotest.sh@105 -- # sync 00:04:12.679 06:28:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.679 06:28:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.679 06:28:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.054 06:28:05 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.313 06:28:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.313 06:28:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.313 06:28:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.830 
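The pass traced above walks every /dev/nvme*n* namespace, keeps any device whose blkid PTTYPE probe reports a partition table, and zeroes the first mebibyte of the rest before syncing. A minimal bash sketch of that wipe loop, assuming only blkid, dd and extglob are available (an illustration of the idea, not the autotest.sh helper itself):

  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      # Keep namespaces that already carry a partition table.
      if [[ -n "$(blkid -s PTTYPE -o value "$dev")" ]]; then
          continue
      fi
      # Otherwise wipe the first 1 MiB so later stages see a clean namespace.
      dd if=/dev/zero of="$dev" bs=1M count=1
  done
  sync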
Hugepages 00:04:14.830 node hugesize free / total 00:04:14.830 node0 1048576kB 0 / 0 00:04:14.830 node0 2048kB 0 / 0 00:04:14.830 00:04:14.830 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.088 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.088 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:15.088 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:15.088 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:15.347 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:15.347 06:28:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.347 06:28:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.347 06:28:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.347 06:28:07 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.174 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.174 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.174 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.174 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.174 06:28:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:17.551 06:28:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:17.551 06:28:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:17.551 06:28:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:17.551 06:28:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:17.551 06:28:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:17.551 06:28:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:17.551 06:28:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.551 06:28:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:17.551 06:28:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:17.551 06:28:09 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:17.551 06:28:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:17.551 06:28:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.812 Waiting for block devices as requested 00:04:17.812 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.073 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.073 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.073 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.405 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:23.405 06:28:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.405 06:28:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:23.405 06:28:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.405 06:28:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:23.405 06:28:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.406 06:28:14 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:23.406 06:28:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:23.406 06:28:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.406 06:28:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1543 -- # continue 00:04:23.406 06:28:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.406 06:28:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1543 -- # continue 00:04:23.406 06:28:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.406 06:28:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1543 -- # continue 00:04:23.406 06:28:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:23.406 06:28:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.406 06:28:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.406 06:28:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.406 06:28:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:23.406 06:28:15 -- common/autotest_common.sh@1543 -- # continue 00:04:23.406 06:28:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:23.406 06:28:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.406 06:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.406 06:28:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:23.406 06:28:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.406 06:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.406 06:28:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.551 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.551 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.551 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.551 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.551 06:28:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:24.551 06:28:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.551 06:28:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.551 06:28:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.551 06:28:16 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:24.551 06:28:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.551 06:28:16 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:24.551 06:28:16 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:24.551 06:28:16 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:24.551 06:28:16 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.551 06:28:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:24.551 06:28:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.551 06:28:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.551 06:28:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.551 06:28:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.551 06:28:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.813 06:28:16 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.813 06:28:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.813 06:28:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.813 06:28:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.813 06:28:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.813 06:28:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.813 06:28:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.813 06:28:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
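The pre-cleanup gate that completes just above decides, per controller, whether namespace management is advertised (OACS bit 3) and whether unallocated NVM capacity is zero before the device is handed to the tests. A rough sketch of that check, assuming the usual nvme-cli "oacs : 0x12a" / "unvmcap : 0" output format and a hypothetical /dev/nvme0 controller node (not the exact autotest helper):

  ctrlr=/dev/nvme0                                   # hypothetical controller node
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  # Bit 3 of OACS is Namespace Management/Attachment support.
  if (( (oacs & 0x8) != 0 )) && [[ $unvmcap -eq 0 ]]; then
      echo "$ctrlr supports namespace management and has no unallocated capacity"
  fi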
00:04:24.813 06:28:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:24.813 06:28:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.813 06:28:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.813 06:28:16 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:24.813 06:28:16 -- common/autotest_common.sh@1572 -- # return 0 00:04:24.813 06:28:16 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:24.813 06:28:16 -- common/autotest_common.sh@1580 -- # return 0 00:04:24.813 06:28:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:24.813 06:28:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:24.813 06:28:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.813 06:28:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.813 06:28:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:24.813 06:28:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.813 06:28:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.813 06:28:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:24.813 06:28:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.813 06:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.813 06:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.813 06:28:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.813 ************************************ 00:04:24.813 START TEST env 00:04:24.813 ************************************ 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.813 * Looking for test storage... 00:04:24.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.813 06:28:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.813 06:28:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.813 06:28:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.813 06:28:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.813 06:28:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.813 06:28:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.813 06:28:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.813 06:28:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.813 06:28:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.813 06:28:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.813 06:28:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.813 06:28:16 env -- scripts/common.sh@344 -- # case "$op" in 00:04:24.813 06:28:16 env -- scripts/common.sh@345 -- # : 1 00:04:24.813 06:28:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.813 06:28:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.813 06:28:16 env -- scripts/common.sh@365 -- # decimal 1 00:04:24.813 06:28:16 env -- scripts/common.sh@353 -- # local d=1 00:04:24.813 06:28:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.813 06:28:16 env -- scripts/common.sh@355 -- # echo 1 00:04:24.813 06:28:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.813 06:28:16 env -- scripts/common.sh@366 -- # decimal 2 00:04:24.813 06:28:16 env -- scripts/common.sh@353 -- # local d=2 00:04:24.813 06:28:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.813 06:28:16 env -- scripts/common.sh@355 -- # echo 2 00:04:24.813 06:28:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.813 06:28:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.813 06:28:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.813 06:28:16 env -- scripts/common.sh@368 -- # return 0 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.813 --rc genhtml_branch_coverage=1 00:04:24.813 --rc genhtml_function_coverage=1 00:04:24.813 --rc genhtml_legend=1 00:04:24.813 --rc geninfo_all_blocks=1 00:04:24.813 --rc geninfo_unexecuted_blocks=1 00:04:24.813 00:04:24.813 ' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.813 --rc genhtml_branch_coverage=1 00:04:24.813 --rc genhtml_function_coverage=1 00:04:24.813 --rc genhtml_legend=1 00:04:24.813 --rc geninfo_all_blocks=1 00:04:24.813 --rc geninfo_unexecuted_blocks=1 00:04:24.813 00:04:24.813 ' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.813 --rc genhtml_branch_coverage=1 00:04:24.813 --rc genhtml_function_coverage=1 00:04:24.813 --rc genhtml_legend=1 00:04:24.813 --rc geninfo_all_blocks=1 00:04:24.813 --rc geninfo_unexecuted_blocks=1 00:04:24.813 00:04:24.813 ' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.813 --rc genhtml_branch_coverage=1 00:04:24.813 --rc genhtml_function_coverage=1 00:04:24.813 --rc genhtml_legend=1 00:04:24.813 --rc geninfo_all_blocks=1 00:04:24.813 --rc geninfo_unexecuted_blocks=1 00:04:24.813 00:04:24.813 ' 00:04:24.813 06:28:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.813 06:28:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.813 06:28:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.813 ************************************ 00:04:24.813 START TEST env_memory 00:04:24.813 ************************************ 00:04:24.813 06:28:16 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.813 00:04:24.813 00:04:24.813 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.813 http://cunit.sourceforge.net/ 00:04:24.813 00:04:24.813 00:04:24.813 Suite: memory 00:04:25.075 Test: alloc and free memory map ...[2024-11-19 06:28:16.769343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.075 passed 00:04:25.075 Test: mem map translation ...[2024-11-19 06:28:16.809568] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.075 [2024-11-19 06:28:16.809797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.075 [2024-11-19 06:28:16.810336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.075 [2024-11-19 06:28:16.810560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.075 passed 00:04:25.075 Test: mem map registration ...[2024-11-19 06:28:16.879595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:25.075 [2024-11-19 06:28:16.879819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:25.075 passed 00:04:25.075 Test: mem map adjacent registrations ...passed 00:04:25.075 00:04:25.075 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.075 suites 1 1 n/a 0 0 00:04:25.075 tests 4 4 4 0 0 00:04:25.075 asserts 152 152 152 0 n/a 00:04:25.075 00:04:25.075 Elapsed time = 0.245 seconds 00:04:25.075 00:04:25.075 real 0m0.281s 00:04:25.075 user 0m0.257s 00:04:25.075 sys 0m0.012s 00:04:25.075 06:28:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.075 ************************************ 00:04:25.075 END TEST env_memory 00:04:25.075 ************************************ 00:04:25.075 06:28:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:25.337 06:28:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:25.337 06:28:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.337 06:28:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.337 06:28:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.337 ************************************ 00:04:25.337 START TEST env_vtophys 00:04:25.337 ************************************ 00:04:25.337 06:28:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:25.337 EAL: lib.eal log level changed from notice to debug 00:04:25.337 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 1 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 2 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 3 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 4 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 5 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 6 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 7 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 8 as core 0 on socket 0 00:04:25.337 EAL: Detected lcore 9 as core 0 on socket 0 00:04:25.337 EAL: Maximum logical cores by configuration: 128 00:04:25.337 EAL: Detected CPU lcores: 10 00:04:25.337 EAL: Detected NUMA nodes: 1 00:04:25.337 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:25.337 EAL: Detected shared linkage of DPDK 00:04:25.337 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:25.337 EAL: Selected IOVA mode 'PA' 00:04:25.337 EAL: Probing VFIO support... 00:04:25.337 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:25.337 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:25.337 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.337 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.337 EAL: Setting up physically contiguous memory... 00:04:25.337 EAL: Setting maximum number of open files to 524288 00:04:25.337 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.337 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.337 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.337 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.337 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.337 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.337 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.337 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.337 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.337 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.337 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.337 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.337 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.337 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.337 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.337 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.337 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.337 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.337 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.337 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.337 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.337 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:25.337 EAL: Hugepages will be freed exactly as allocated. 00:04:25.337 EAL: No shared files mode enabled, IPC is disabled 00:04:25.337 EAL: No shared files mode enabled, IPC is disabled 00:04:25.337 EAL: TSC frequency is ~2600000 KHz 00:04:25.337 EAL: Main lcore 0 is ready (tid=7fc2e0ebaa40;cpuset=[0]) 00:04:25.337 EAL: Trying to obtain current memory policy. 00:04:25.337 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.337 EAL: Restoring previous memory policy: 0 00:04:25.337 EAL: request: mp_malloc_sync 00:04:25.337 EAL: No shared files mode enabled, IPC is disabled 00:04:25.337 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.338 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:25.338 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.338 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.338 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:25.338 00:04:25.338 00:04:25.338 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.338 http://cunit.sourceforge.net/ 00:04:25.338 00:04:25.338 00:04:25.338 Suite: components_suite 00:04:25.910 Test: vtophys_malloc_test ...passed 00:04:25.910 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.910 EAL: Trying to obtain current memory policy. 00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.910 EAL: Trying to obtain current memory policy. 00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.910 EAL: Trying to obtain current memory policy. 00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.910 EAL: Trying to obtain current memory policy. 00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.910 EAL: Trying to obtain current memory policy. 
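The env_memory suite earlier in this output drives spdk_mem_map_set_translation() and spdk_mem_register() with deliberately bad arguments; the *ERROR* lines are the expected rejections, since translations are tracked at 2 MB granularity and both vaddr and len must be 2 MB aligned. Below is a minimal sketch of the aligned happy path, assuming the SPDK env library is already initialized; the 0xDEADBEEF translation value and the printf are illustrative only, not taken from the test source.

```c
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

/* Sketch only: the aligned counterpart of the negative parameter checks above. */
static void
mem_map_sketch(void)
{
	/* default translation 0, no notify callbacks, no callback context */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, NULL, NULL);
	uint64_t vaddr = 0x200000;   /* 2 MB aligned, unlike vaddr=1234 above */
	uint64_t len = 0x200000;     /* 2 MB aligned, unlike len=1234 above */
	uint64_t size = len;

	if (map == NULL) {
		return;
	}

	if (spdk_mem_map_set_translation(map, vaddr, len, 0xDEADBEEF) == 0) {
		/* size is clamped to the contiguously translated region on return */
		printf("translation 0x%" PRIx64 "\n",
		       spdk_mem_map_translate(map, vaddr, &size));
		spdk_mem_map_clear_translation(map, vaddr, len);
	}

	spdk_mem_map_free(&map);
}
```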
00:04:25.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.910 EAL: Restoring previous memory policy: 4 00:04:25.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.910 EAL: request: mp_malloc_sync 00:04:25.910 EAL: No shared files mode enabled, IPC is disabled 00:04:25.910 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.170 EAL: request: mp_malloc_sync 00:04:26.170 EAL: No shared files mode enabled, IPC is disabled 00:04:26.170 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.170 EAL: Trying to obtain current memory policy. 00:04:26.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.170 EAL: Restoring previous memory policy: 4 00:04:26.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.170 EAL: request: mp_malloc_sync 00:04:26.170 EAL: No shared files mode enabled, IPC is disabled 00:04:26.170 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.430 EAL: request: mp_malloc_sync 00:04:26.430 EAL: No shared files mode enabled, IPC is disabled 00:04:26.430 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.430 EAL: Trying to obtain current memory policy. 00:04:26.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.691 EAL: Restoring previous memory policy: 4 00:04:26.691 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.691 EAL: request: mp_malloc_sync 00:04:26.691 EAL: No shared files mode enabled, IPC is disabled 00:04:26.691 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.953 EAL: request: mp_malloc_sync 00:04:26.953 EAL: No shared files mode enabled, IPC is disabled 00:04:26.953 EAL: Heap on socket 0 was shrunk by 258MB 00:04:27.215 EAL: Trying to obtain current memory policy. 00:04:27.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.475 EAL: Restoring previous memory policy: 4 00:04:27.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.475 EAL: request: mp_malloc_sync 00:04:27.475 EAL: No shared files mode enabled, IPC is disabled 00:04:27.475 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.043 EAL: request: mp_malloc_sync 00:04:28.043 EAL: No shared files mode enabled, IPC is disabled 00:04:28.043 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.617 EAL: Trying to obtain current memory policy. 
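Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair above is the DPDK heap growing or trimming around an allocation, with the registered 'spdk' mem event callback re-registering the affected region so virtual-to-physical translation keeps working. A hedged sketch of the allocate/translate/free cycle the vtophys test exercises follows; the 4 MB size and 2 MB alignment are illustrative, not taken from the test source.

```c
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

/* Sketch of the pattern behind the expand/shrink messages above. */
static void
vtophys_sketch(void)
{
	size_t size = 4 * 1024 * 1024;                      /* illustrative size */
	void *buf = spdk_dma_malloc(size, 0x200000, NULL);  /* 2 MB alignment */
	uint64_t len = size;
	uint64_t paddr;

	if (buf == NULL) {
		return;
	}

	/* Valid only because the 'spdk' mem event callback registered the new heap region. */
	paddr = spdk_vtophys(buf, &len);
	if (paddr != SPDK_VTOPHYS_ERROR) {
		printf("vaddr %p -> paddr 0x%" PRIx64 ", %" PRIu64 " contiguous bytes\n",
		       buf, paddr, len);
	}

	spdk_dma_free(buf);   /* the heap may later be trimmed, hence the 'shrunk by' lines */
}
```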
00:04:28.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.617 EAL: Restoring previous memory policy: 4 00:04:28.617 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.617 EAL: request: mp_malloc_sync 00:04:28.617 EAL: No shared files mode enabled, IPC is disabled 00:04:28.617 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.826 EAL: request: mp_malloc_sync 00:04:29.826 EAL: No shared files mode enabled, IPC is disabled 00:04:29.826 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.765 passed 00:04:30.765 00:04:30.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.765 suites 1 1 n/a 0 0 00:04:30.765 tests 2 2 2 0 0 00:04:30.765 asserts 5978 5978 5978 0 n/a 00:04:30.765 00:04:30.765 Elapsed time = 5.041 seconds 00:04:30.765 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.765 EAL: request: mp_malloc_sync 00:04:30.765 EAL: No shared files mode enabled, IPC is disabled 00:04:30.765 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.765 EAL: No shared files mode enabled, IPC is disabled 00:04:30.765 EAL: No shared files mode enabled, IPC is disabled 00:04:30.765 EAL: No shared files mode enabled, IPC is disabled 00:04:30.765 00:04:30.766 real 0m5.332s 00:04:30.766 user 0m4.208s 00:04:30.766 sys 0m0.962s 00:04:30.766 06:28:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.766 06:28:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:30.766 ************************************ 00:04:30.766 END TEST env_vtophys 00:04:30.766 ************************************ 00:04:30.766 06:28:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:30.766 06:28:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.766 06:28:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.766 06:28:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.766 ************************************ 00:04:30.766 START TEST env_pci 00:04:30.766 ************************************ 00:04:30.766 06:28:22 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:30.766 00:04:30.766 00:04:30.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.766 http://cunit.sourceforge.net/ 00:04:30.766 00:04:30.766 00:04:30.766 Suite: pci 00:04:30.766 Test: pci_hook ...[2024-11-19 06:28:22.481631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57062 has claimed it 00:04:30.766 passed 00:04:30.766 00:04:30.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.766 suites 1 1 n/a 0 0 00:04:30.766 tests 1 1 1 0 0 00:04:30.766 asserts 25 25 25 0 n/a 00:04:30.766 00:04:30.766 Elapsed time = 0.007 seconds 00:04:30.766 EAL: Cannot find device (10000:00:01.0) 00:04:30.766 EAL: Failed to attach device on primary process 00:04:30.766 00:04:30.766 real 0m0.065s 00:04:30.766 user 0m0.032s 00:04:30.766 sys 0m0.032s 00:04:30.766 06:28:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.766 06:28:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:30.766 ************************************ 00:04:30.766 END TEST env_pci 00:04:30.766 ************************************ 00:04:30.766 06:28:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:30.766 06:28:22 env -- env/env.sh@15 -- # uname 00:04:30.766 06:28:22 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:30.766 06:28:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:30.766 06:28:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.766 06:28:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:30.766 06:28:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.766 06:28:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.766 ************************************ 00:04:30.766 START TEST env_dpdk_post_init 00:04:30.766 ************************************ 00:04:30.766 06:28:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.766 EAL: Detected CPU lcores: 10 00:04:30.766 EAL: Detected NUMA nodes: 1 00:04:30.766 EAL: Detected shared linkage of DPDK 00:04:30.766 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.766 EAL: Selected IOVA mode 'PA' 00:04:31.028 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:31.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:31.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:31.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:31.028 Starting DPDK initialization... 00:04:31.028 Starting SPDK post initialization... 00:04:31.028 SPDK NVMe probe 00:04:31.028 Attaching to 0000:00:10.0 00:04:31.028 Attaching to 0000:00:11.0 00:04:31.028 Attaching to 0000:00:12.0 00:04:31.028 Attaching to 0000:00:13.0 00:04:31.028 Attached to 0000:00:11.0 00:04:31.028 Attached to 0000:00:13.0 00:04:31.028 Attached to 0000:00:10.0 00:04:31.028 Attached to 0000:00:12.0 00:04:31.028 Cleaning up... 
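env_dpdk_post_init above runs with '-c 0x1 --base-virtaddr=0x200000000000' and then attaches the four NVMe controllers (1b36:0010). A rough sketch of the corresponding environment setup is shown below; the option names mirror SPDK's public spdk_env_opts, error handling is trimmed, and the NVMe probe step is only indicated in a comment rather than reproduced from the test.

```c
#include "spdk/env.h"
#include <stdio.h>

/* Hedged sketch of the initialization implied by '-c 0x1 --base-virtaddr=...'. */
int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init";
	opts.core_mask = "0x1";                  /* -c 0x1 */
	opts.base_virtaddr = 0x200000000000ULL;  /* --base-virtaddr=0x200000000000 */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* ... the test then probes the 1b36:0010 controllers through the usual
	 * spdk_nvme_probe() probe/attach callbacks, as logged above ... */

	spdk_env_fini();
	return 0;
}
```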
00:04:31.028 ************************************ 00:04:31.028 END TEST env_dpdk_post_init 00:04:31.028 ************************************ 00:04:31.028 00:04:31.028 real 0m0.255s 00:04:31.028 user 0m0.080s 00:04:31.028 sys 0m0.077s 00:04:31.028 06:28:22 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.028 06:28:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.028 06:28:22 env -- env/env.sh@26 -- # uname 00:04:31.028 06:28:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.028 06:28:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.029 06:28:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.029 06:28:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.029 06:28:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.029 ************************************ 00:04:31.029 START TEST env_mem_callbacks 00:04:31.029 ************************************ 00:04:31.029 06:28:22 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.029 EAL: Detected CPU lcores: 10 00:04:31.029 EAL: Detected NUMA nodes: 1 00:04:31.029 EAL: Detected shared linkage of DPDK 00:04:31.029 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.029 EAL: Selected IOVA mode 'PA' 00:04:31.289 00:04:31.289 00:04:31.289 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.289 http://cunit.sourceforge.net/ 00:04:31.289 00:04:31.289 00:04:31.289 Suite: memory 00:04:31.289 Test: test ... 00:04:31.289 register 0x200000200000 2097152 00:04:31.289 malloc 3145728 00:04:31.289 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.289 register 0x200000400000 4194304 00:04:31.289 buf 0x2000004fffc0 len 3145728 PASSED 00:04:31.289 malloc 64 00:04:31.289 buf 0x2000004ffec0 len 64 PASSED 00:04:31.289 malloc 4194304 00:04:31.289 register 0x200000800000 6291456 00:04:31.289 buf 0x2000009fffc0 len 4194304 PASSED 00:04:31.289 free 0x2000004fffc0 3145728 00:04:31.289 free 0x2000004ffec0 64 00:04:31.289 unregister 0x200000400000 4194304 PASSED 00:04:31.289 free 0x2000009fffc0 4194304 00:04:31.289 unregister 0x200000800000 6291456 PASSED 00:04:31.289 malloc 8388608 00:04:31.289 register 0x200000400000 10485760 00:04:31.289 buf 0x2000005fffc0 len 8388608 PASSED 00:04:31.289 free 0x2000005fffc0 8388608 00:04:31.289 unregister 0x200000400000 10485760 PASSED 00:04:31.289 passed 00:04:31.289 00:04:31.289 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.289 suites 1 1 n/a 0 0 00:04:31.289 tests 1 1 1 0 0 00:04:31.289 asserts 15 15 15 0 n/a 00:04:31.289 00:04:31.289 Elapsed time = 0.046 seconds 00:04:31.289 00:04:31.289 real 0m0.224s 00:04:31.289 user 0m0.067s 00:04:31.289 sys 0m0.054s 00:04:31.289 06:28:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.289 06:28:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:31.289 ************************************ 00:04:31.289 END TEST env_mem_callbacks 00:04:31.289 ************************************ 00:04:31.289 00:04:31.289 real 0m6.624s 00:04:31.289 user 0m4.804s 00:04:31.289 sys 0m1.355s 00:04:31.289 06:28:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.289 ************************************ 00:04:31.289 END TEST env 00:04:31.289 06:28:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.289 
************************************ 00:04:31.289 06:28:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:31.289 06:28:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.289 06:28:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.289 06:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.552 ************************************ 00:04:31.552 START TEST rpc 00:04:31.552 ************************************ 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:31.552 * Looking for test storage... 00:04:31.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.552 06:28:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.552 06:28:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.552 06:28:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.552 06:28:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.552 06:28:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.552 06:28:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.552 06:28:23 rpc -- scripts/common.sh@345 -- # : 1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.552 06:28:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.552 06:28:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.552 06:28:23 rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.552 06:28:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.552 06:28:23 rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
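The scripts/common.sh xtrace above (continuing just below) is cmp_versions splitting the installed lcov version and the 1.15 / 2 bounds into dotted components and comparing them element by element, which decides whether the pre-2.0 lcov option set is used. Re-expressed in C purely as an illustration of that comparison; this helper is not part of SPDK or of the test scripts.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: the element-wise compare that cmp_versions performs. */
static int
cmp_versions(const char *a, const char *b)
{
	char *end;

	while (*a != '\0' || *b != '\0') {
		long x = strtol(a, &end, 10);
		a = end;
		long y = strtol(b, &end, 10);
		b = end;

		if (x != y) {
			return x < y ? -1 : 1;
		}
		/* skip the '.', '-' or ':' separators the shell splits on */
		a += strspn(a, ".-:");
		b += strspn(b, ".-:");
	}
	return 0;
}

int
main(void)
{
	/* lcov 1.15 sorts before 2, so the script keeps the pre-2.0 options */
	printf("%d\n", cmp_versions("1.15", "2"));   /* prints -1 */
	return 0;
}
```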
00:04:31.552 06:28:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.552 06:28:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.552 06:28:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.552 06:28:23 rpc -- scripts/common.sh@368 -- # return 0 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.552 --rc genhtml_branch_coverage=1 00:04:31.552 --rc genhtml_function_coverage=1 00:04:31.552 --rc genhtml_legend=1 00:04:31.552 --rc geninfo_all_blocks=1 00:04:31.552 --rc geninfo_unexecuted_blocks=1 00:04:31.552 00:04:31.552 ' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.552 --rc genhtml_branch_coverage=1 00:04:31.552 --rc genhtml_function_coverage=1 00:04:31.552 --rc genhtml_legend=1 00:04:31.552 --rc geninfo_all_blocks=1 00:04:31.552 --rc geninfo_unexecuted_blocks=1 00:04:31.552 00:04:31.552 ' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.552 --rc genhtml_branch_coverage=1 00:04:31.552 --rc genhtml_function_coverage=1 00:04:31.552 --rc genhtml_legend=1 00:04:31.552 --rc geninfo_all_blocks=1 00:04:31.552 --rc geninfo_unexecuted_blocks=1 00:04:31.552 00:04:31.552 ' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.552 --rc genhtml_branch_coverage=1 00:04:31.552 --rc genhtml_function_coverage=1 00:04:31.552 --rc genhtml_legend=1 00:04:31.552 --rc geninfo_all_blocks=1 00:04:31.552 --rc geninfo_unexecuted_blocks=1 00:04:31.552 00:04:31.552 ' 00:04:31.552 06:28:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57183 00:04:31.552 06:28:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.552 06:28:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57183 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 57183 ']' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.552 06:28:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.552 06:28:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:31.552 [2024-11-19 06:28:23.482628] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:31.814 [2024-11-19 06:28:23.483035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:04:31.814 [2024-11-19 06:28:23.654079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.075 [2024-11-19 06:28:23.806217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:32.075 [2024-11-19 06:28:23.806297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57183' to capture a snapshot of events at runtime. 00:04:32.075 [2024-11-19 06:28:23.806310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.075 [2024-11-19 06:28:23.806322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.075 [2024-11-19 06:28:23.806330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57183 for offline analysis/debug. 00:04:32.075 [2024-11-19 06:28:23.807383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.019 06:28:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.019 06:28:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.019 06:28:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.019 06:28:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.019 06:28:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.019 06:28:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.019 06:28:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.019 06:28:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.019 06:28:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 ************************************ 00:04:33.019 START TEST rpc_integrity 00:04:33.019 ************************************ 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.019 { 00:04:33.019 "name": "Malloc0", 00:04:33.019 "aliases": [ 00:04:33.019 "322c26c6-12bc-4a80-ba9a-53581790fc7c" 00:04:33.019 ], 
00:04:33.019 "product_name": "Malloc disk", 00:04:33.019 "block_size": 512, 00:04:33.019 "num_blocks": 16384, 00:04:33.019 "uuid": "322c26c6-12bc-4a80-ba9a-53581790fc7c", 00:04:33.019 "assigned_rate_limits": { 00:04:33.019 "rw_ios_per_sec": 0, 00:04:33.019 "rw_mbytes_per_sec": 0, 00:04:33.019 "r_mbytes_per_sec": 0, 00:04:33.019 "w_mbytes_per_sec": 0 00:04:33.019 }, 00:04:33.019 "claimed": false, 00:04:33.019 "zoned": false, 00:04:33.019 "supported_io_types": { 00:04:33.019 "read": true, 00:04:33.019 "write": true, 00:04:33.019 "unmap": true, 00:04:33.019 "flush": true, 00:04:33.019 "reset": true, 00:04:33.019 "nvme_admin": false, 00:04:33.019 "nvme_io": false, 00:04:33.019 "nvme_io_md": false, 00:04:33.019 "write_zeroes": true, 00:04:33.019 "zcopy": true, 00:04:33.019 "get_zone_info": false, 00:04:33.019 "zone_management": false, 00:04:33.019 "zone_append": false, 00:04:33.019 "compare": false, 00:04:33.019 "compare_and_write": false, 00:04:33.019 "abort": true, 00:04:33.019 "seek_hole": false, 00:04:33.019 "seek_data": false, 00:04:33.019 "copy": true, 00:04:33.019 "nvme_iov_md": false 00:04:33.019 }, 00:04:33.019 "memory_domains": [ 00:04:33.019 { 00:04:33.019 "dma_device_id": "system", 00:04:33.019 "dma_device_type": 1 00:04:33.019 }, 00:04:33.019 { 00:04:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.019 "dma_device_type": 2 00:04:33.019 } 00:04:33.019 ], 00:04:33.019 "driver_specific": {} 00:04:33.019 } 00:04:33.019 ]' 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.019 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.019 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 [2024-11-19 06:28:24.743095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.019 [2024-11-19 06:28:24.743351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.019 [2024-11-19 06:28:24.743397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:33.019 [2024-11-19 06:28:24.743412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.019 [2024-11-19 06:28:24.746185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.020 [2024-11-19 06:28:24.746249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.020 Passthru0 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.020 { 00:04:33.020 "name": "Malloc0", 00:04:33.020 "aliases": [ 00:04:33.020 "322c26c6-12bc-4a80-ba9a-53581790fc7c" 00:04:33.020 ], 00:04:33.020 "product_name": "Malloc disk", 00:04:33.020 "block_size": 512, 00:04:33.020 "num_blocks": 16384, 00:04:33.020 "uuid": "322c26c6-12bc-4a80-ba9a-53581790fc7c", 00:04:33.020 "assigned_rate_limits": { 00:04:33.020 "rw_ios_per_sec": 0, 
00:04:33.020 "rw_mbytes_per_sec": 0, 00:04:33.020 "r_mbytes_per_sec": 0, 00:04:33.020 "w_mbytes_per_sec": 0 00:04:33.020 }, 00:04:33.020 "claimed": true, 00:04:33.020 "claim_type": "exclusive_write", 00:04:33.020 "zoned": false, 00:04:33.020 "supported_io_types": { 00:04:33.020 "read": true, 00:04:33.020 "write": true, 00:04:33.020 "unmap": true, 00:04:33.020 "flush": true, 00:04:33.020 "reset": true, 00:04:33.020 "nvme_admin": false, 00:04:33.020 "nvme_io": false, 00:04:33.020 "nvme_io_md": false, 00:04:33.020 "write_zeroes": true, 00:04:33.020 "zcopy": true, 00:04:33.020 "get_zone_info": false, 00:04:33.020 "zone_management": false, 00:04:33.020 "zone_append": false, 00:04:33.020 "compare": false, 00:04:33.020 "compare_and_write": false, 00:04:33.020 "abort": true, 00:04:33.020 "seek_hole": false, 00:04:33.020 "seek_data": false, 00:04:33.020 "copy": true, 00:04:33.020 "nvme_iov_md": false 00:04:33.020 }, 00:04:33.020 "memory_domains": [ 00:04:33.020 { 00:04:33.020 "dma_device_id": "system", 00:04:33.020 "dma_device_type": 1 00:04:33.020 }, 00:04:33.020 { 00:04:33.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.020 "dma_device_type": 2 00:04:33.020 } 00:04:33.020 ], 00:04:33.020 "driver_specific": {} 00:04:33.020 }, 00:04:33.020 { 00:04:33.020 "name": "Passthru0", 00:04:33.020 "aliases": [ 00:04:33.020 "9b77bf6c-ee88-5c50-98c1-bd636bc57e14" 00:04:33.020 ], 00:04:33.020 "product_name": "passthru", 00:04:33.020 "block_size": 512, 00:04:33.020 "num_blocks": 16384, 00:04:33.020 "uuid": "9b77bf6c-ee88-5c50-98c1-bd636bc57e14", 00:04:33.020 "assigned_rate_limits": { 00:04:33.020 "rw_ios_per_sec": 0, 00:04:33.020 "rw_mbytes_per_sec": 0, 00:04:33.020 "r_mbytes_per_sec": 0, 00:04:33.020 "w_mbytes_per_sec": 0 00:04:33.020 }, 00:04:33.020 "claimed": false, 00:04:33.020 "zoned": false, 00:04:33.020 "supported_io_types": { 00:04:33.020 "read": true, 00:04:33.020 "write": true, 00:04:33.020 "unmap": true, 00:04:33.020 "flush": true, 00:04:33.020 "reset": true, 00:04:33.020 "nvme_admin": false, 00:04:33.020 "nvme_io": false, 00:04:33.020 "nvme_io_md": false, 00:04:33.020 "write_zeroes": true, 00:04:33.020 "zcopy": true, 00:04:33.020 "get_zone_info": false, 00:04:33.020 "zone_management": false, 00:04:33.020 "zone_append": false, 00:04:33.020 "compare": false, 00:04:33.020 "compare_and_write": false, 00:04:33.020 "abort": true, 00:04:33.020 "seek_hole": false, 00:04:33.020 "seek_data": false, 00:04:33.020 "copy": true, 00:04:33.020 "nvme_iov_md": false 00:04:33.020 }, 00:04:33.020 "memory_domains": [ 00:04:33.020 { 00:04:33.020 "dma_device_id": "system", 00:04:33.020 "dma_device_type": 1 00:04:33.020 }, 00:04:33.020 { 00:04:33.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.020 "dma_device_type": 2 00:04:33.020 } 00:04:33.020 ], 00:04:33.020 "driver_specific": { 00:04:33.020 "passthru": { 00:04:33.020 "name": "Passthru0", 00:04:33.020 "base_bdev_name": "Malloc0" 00:04:33.020 } 00:04:33.020 } 00:04:33.020 } 00:04:33.020 ]' 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.020 ************************************ 00:04:33.020 END TEST rpc_integrity 00:04:33.020 ************************************ 00:04:33.020 06:28:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.020 00:04:33.020 real 0m0.253s 00:04:33.020 user 0m0.121s 00:04:33.020 sys 0m0.036s 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.020 06:28:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.020 06:28:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.020 06:28:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 ************************************ 00:04:33.020 START TEST rpc_plugins 00:04:33.020 ************************************ 00:04:33.020 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:33.020 06:28:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.020 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.020 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.020 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.020 06:28:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.279 06:28:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.279 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.279 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 06:28:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.279 06:28:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.279 { 00:04:33.279 "name": "Malloc1", 00:04:33.279 "aliases": [ 00:04:33.279 "242cb28d-a431-4983-be65-b8c1e79c03a5" 00:04:33.279 ], 00:04:33.279 "product_name": "Malloc disk", 00:04:33.279 "block_size": 4096, 00:04:33.279 "num_blocks": 256, 00:04:33.279 "uuid": "242cb28d-a431-4983-be65-b8c1e79c03a5", 00:04:33.279 "assigned_rate_limits": { 00:04:33.279 "rw_ios_per_sec": 0, 00:04:33.279 "rw_mbytes_per_sec": 0, 00:04:33.279 "r_mbytes_per_sec": 0, 00:04:33.279 "w_mbytes_per_sec": 0 00:04:33.279 }, 00:04:33.279 "claimed": false, 00:04:33.279 "zoned": false, 00:04:33.279 "supported_io_types": { 00:04:33.279 "read": true, 00:04:33.279 "write": true, 00:04:33.279 "unmap": true, 00:04:33.279 "flush": true, 00:04:33.279 "reset": true, 00:04:33.279 "nvme_admin": false, 00:04:33.279 "nvme_io": false, 00:04:33.279 "nvme_io_md": false, 00:04:33.279 "write_zeroes": true, 
00:04:33.279 "zcopy": true, 00:04:33.279 "get_zone_info": false, 00:04:33.279 "zone_management": false, 00:04:33.279 "zone_append": false, 00:04:33.279 "compare": false, 00:04:33.279 "compare_and_write": false, 00:04:33.279 "abort": true, 00:04:33.279 "seek_hole": false, 00:04:33.279 "seek_data": false, 00:04:33.279 "copy": true, 00:04:33.279 "nvme_iov_md": false 00:04:33.279 }, 00:04:33.279 "memory_domains": [ 00:04:33.279 { 00:04:33.279 "dma_device_id": "system", 00:04:33.279 "dma_device_type": 1 00:04:33.279 }, 00:04:33.279 { 00:04:33.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.279 "dma_device_type": 2 00:04:33.279 } 00:04:33.279 ], 00:04:33.279 "driver_specific": {} 00:04:33.279 } 00:04:33.279 ]' 00:04:33.279 06:28:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:33.279 ************************************ 00:04:33.279 END TEST rpc_plugins 00:04:33.279 ************************************ 00:04:33.279 06:28:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:33.279 00:04:33.279 real 0m0.128s 00:04:33.279 user 0m0.073s 00:04:33.279 sys 0m0.018s 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.279 06:28:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 06:28:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:33.279 06:28:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.279 06:28:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.279 06:28:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 ************************************ 00:04:33.279 START TEST rpc_trace_cmd_test 00:04:33.279 ************************************ 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.279 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:33.279 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57183", 00:04:33.279 "tpoint_group_mask": "0x8", 00:04:33.279 "iscsi_conn": { 00:04:33.279 "mask": "0x2", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "scsi": { 00:04:33.279 
"mask": "0x4", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "bdev": { 00:04:33.279 "mask": "0x8", 00:04:33.279 "tpoint_mask": "0xffffffffffffffff" 00:04:33.279 }, 00:04:33.279 "nvmf_rdma": { 00:04:33.279 "mask": "0x10", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "nvmf_tcp": { 00:04:33.279 "mask": "0x20", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "ftl": { 00:04:33.279 "mask": "0x40", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "blobfs": { 00:04:33.279 "mask": "0x80", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "dsa": { 00:04:33.279 "mask": "0x200", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "thread": { 00:04:33.279 "mask": "0x400", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "nvme_pcie": { 00:04:33.279 "mask": "0x800", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.279 "iaa": { 00:04:33.279 "mask": "0x1000", 00:04:33.279 "tpoint_mask": "0x0" 00:04:33.279 }, 00:04:33.280 "nvme_tcp": { 00:04:33.280 "mask": "0x2000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 }, 00:04:33.280 "bdev_nvme": { 00:04:33.280 "mask": "0x4000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 }, 00:04:33.280 "sock": { 00:04:33.280 "mask": "0x8000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 }, 00:04:33.280 "blob": { 00:04:33.280 "mask": "0x10000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 }, 00:04:33.280 "bdev_raid": { 00:04:33.280 "mask": "0x20000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 }, 00:04:33.280 "scheduler": { 00:04:33.280 "mask": "0x40000", 00:04:33.280 "tpoint_mask": "0x0" 00:04:33.280 } 00:04:33.280 }' 00:04:33.280 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:33.280 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:33.280 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:33.280 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:33.280 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:33.539 ************************************ 00:04:33.539 END TEST rpc_trace_cmd_test 00:04:33.539 ************************************ 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:33.539 00:04:33.539 real 0m0.167s 00:04:33.539 user 0m0.134s 00:04:33.539 sys 0m0.024s 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 06:28:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:33.539 06:28:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:33.539 06:28:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:33.539 06:28:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.539 06:28:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.539 06:28:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 START TEST rpc_daemon_integrity 00:04:33.539 
************************************ 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.539 { 00:04:33.539 "name": "Malloc2", 00:04:33.539 "aliases": [ 00:04:33.539 "ffc7cfdc-e67e-4dc7-98a8-363eae924c5a" 00:04:33.539 ], 00:04:33.539 "product_name": "Malloc disk", 00:04:33.539 "block_size": 512, 00:04:33.539 "num_blocks": 16384, 00:04:33.539 "uuid": "ffc7cfdc-e67e-4dc7-98a8-363eae924c5a", 00:04:33.539 "assigned_rate_limits": { 00:04:33.539 "rw_ios_per_sec": 0, 00:04:33.539 "rw_mbytes_per_sec": 0, 00:04:33.539 "r_mbytes_per_sec": 0, 00:04:33.539 "w_mbytes_per_sec": 0 00:04:33.539 }, 00:04:33.539 "claimed": false, 00:04:33.539 "zoned": false, 00:04:33.539 "supported_io_types": { 00:04:33.539 "read": true, 00:04:33.539 "write": true, 00:04:33.539 "unmap": true, 00:04:33.539 "flush": true, 00:04:33.539 "reset": true, 00:04:33.539 "nvme_admin": false, 00:04:33.539 "nvme_io": false, 00:04:33.539 "nvme_io_md": false, 00:04:33.539 "write_zeroes": true, 00:04:33.539 "zcopy": true, 00:04:33.539 "get_zone_info": false, 00:04:33.539 "zone_management": false, 00:04:33.539 "zone_append": false, 00:04:33.539 "compare": false, 00:04:33.539 "compare_and_write": false, 00:04:33.539 "abort": true, 00:04:33.539 "seek_hole": false, 00:04:33.539 "seek_data": false, 00:04:33.539 "copy": true, 00:04:33.539 "nvme_iov_md": false 00:04:33.539 }, 00:04:33.539 "memory_domains": [ 00:04:33.539 { 00:04:33.539 "dma_device_id": "system", 00:04:33.539 "dma_device_type": 1 00:04:33.539 }, 00:04:33.539 { 00:04:33.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.539 "dma_device_type": 2 00:04:33.539 } 00:04:33.539 ], 00:04:33.539 "driver_specific": {} 00:04:33.539 } 00:04:33.539 ]' 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 [2024-11-19 06:28:25.439571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:33.539 [2024-11-19 06:28:25.439619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.539 [2024-11-19 06:28:25.439637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:33.539 [2024-11-19 06:28:25.439647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.539 [2024-11-19 06:28:25.441509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.539 [2024-11-19 06:28:25.441630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.539 Passthru0 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.539 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.539 { 00:04:33.539 "name": "Malloc2", 00:04:33.539 "aliases": [ 00:04:33.539 "ffc7cfdc-e67e-4dc7-98a8-363eae924c5a" 00:04:33.539 ], 00:04:33.539 "product_name": "Malloc disk", 00:04:33.539 "block_size": 512, 00:04:33.539 "num_blocks": 16384, 00:04:33.539 "uuid": "ffc7cfdc-e67e-4dc7-98a8-363eae924c5a", 00:04:33.539 "assigned_rate_limits": { 00:04:33.539 "rw_ios_per_sec": 0, 00:04:33.539 "rw_mbytes_per_sec": 0, 00:04:33.539 "r_mbytes_per_sec": 0, 00:04:33.539 "w_mbytes_per_sec": 0 00:04:33.539 }, 00:04:33.539 "claimed": true, 00:04:33.539 "claim_type": "exclusive_write", 00:04:33.539 "zoned": false, 00:04:33.539 "supported_io_types": { 00:04:33.539 "read": true, 00:04:33.539 "write": true, 00:04:33.539 "unmap": true, 00:04:33.539 "flush": true, 00:04:33.539 "reset": true, 00:04:33.539 "nvme_admin": false, 00:04:33.539 "nvme_io": false, 00:04:33.539 "nvme_io_md": false, 00:04:33.539 "write_zeroes": true, 00:04:33.539 "zcopy": true, 00:04:33.539 "get_zone_info": false, 00:04:33.539 "zone_management": false, 00:04:33.539 "zone_append": false, 00:04:33.539 "compare": false, 00:04:33.539 "compare_and_write": false, 00:04:33.539 "abort": true, 00:04:33.539 "seek_hole": false, 00:04:33.539 "seek_data": false, 00:04:33.539 "copy": true, 00:04:33.539 "nvme_iov_md": false 00:04:33.539 }, 00:04:33.539 "memory_domains": [ 00:04:33.539 { 00:04:33.539 "dma_device_id": "system", 00:04:33.539 "dma_device_type": 1 00:04:33.539 }, 00:04:33.539 { 00:04:33.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.539 "dma_device_type": 2 00:04:33.539 } 00:04:33.539 ], 00:04:33.539 "driver_specific": {} 00:04:33.539 }, 00:04:33.539 { 00:04:33.539 "name": "Passthru0", 00:04:33.539 "aliases": [ 00:04:33.539 "6f5d2a64-f215-5b05-a069-acc4ec4d7751" 00:04:33.539 ], 00:04:33.539 "product_name": "passthru", 00:04:33.539 "block_size": 512, 00:04:33.539 "num_blocks": 16384, 00:04:33.539 "uuid": "6f5d2a64-f215-5b05-a069-acc4ec4d7751", 00:04:33.539 "assigned_rate_limits": { 00:04:33.539 
"rw_ios_per_sec": 0, 00:04:33.539 "rw_mbytes_per_sec": 0, 00:04:33.539 "r_mbytes_per_sec": 0, 00:04:33.539 "w_mbytes_per_sec": 0 00:04:33.539 }, 00:04:33.539 "claimed": false, 00:04:33.539 "zoned": false, 00:04:33.539 "supported_io_types": { 00:04:33.540 "read": true, 00:04:33.540 "write": true, 00:04:33.540 "unmap": true, 00:04:33.540 "flush": true, 00:04:33.540 "reset": true, 00:04:33.540 "nvme_admin": false, 00:04:33.540 "nvme_io": false, 00:04:33.540 "nvme_io_md": false, 00:04:33.540 "write_zeroes": true, 00:04:33.540 "zcopy": true, 00:04:33.540 "get_zone_info": false, 00:04:33.540 "zone_management": false, 00:04:33.540 "zone_append": false, 00:04:33.540 "compare": false, 00:04:33.540 "compare_and_write": false, 00:04:33.540 "abort": true, 00:04:33.540 "seek_hole": false, 00:04:33.540 "seek_data": false, 00:04:33.540 "copy": true, 00:04:33.540 "nvme_iov_md": false 00:04:33.540 }, 00:04:33.540 "memory_domains": [ 00:04:33.540 { 00:04:33.540 "dma_device_id": "system", 00:04:33.540 "dma_device_type": 1 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.540 "dma_device_type": 2 00:04:33.540 } 00:04:33.540 ], 00:04:33.540 "driver_specific": { 00:04:33.540 "passthru": { 00:04:33.540 "name": "Passthru0", 00:04:33.540 "base_bdev_name": "Malloc2" 00:04:33.540 } 00:04:33.540 } 00:04:33.540 } 00:04:33.540 ]' 00:04:33.540 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.798 ************************************ 00:04:33.798 END TEST rpc_daemon_integrity 00:04:33.798 ************************************ 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.798 00:04:33.798 real 0m0.233s 00:04:33.798 user 0m0.123s 00:04:33.798 sys 0m0.036s 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.798 06:28:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.798 06:28:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:33.798 06:28:25 rpc -- rpc/rpc.sh@84 -- # killprocess 57183 00:04:33.798 06:28:25 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57183 ']' 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@958 -- # kill -0 57183 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57183 00:04:33.798 killing process with pid 57183 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57183' 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@973 -- # kill 57183 00:04:33.798 06:28:25 rpc -- common/autotest_common.sh@978 -- # wait 57183 00:04:35.172 00:04:35.172 real 0m3.624s 00:04:35.172 user 0m3.827s 00:04:35.172 sys 0m0.848s 00:04:35.172 06:28:26 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.172 06:28:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.172 ************************************ 00:04:35.172 END TEST rpc 00:04:35.172 ************************************ 00:04:35.172 06:28:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.172 06:28:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.172 06:28:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.172 06:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:35.172 ************************************ 00:04:35.172 START TEST skip_rpc 00:04:35.172 ************************************ 00:04:35.172 06:28:26 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.172 * Looking for test storage... 00:04:35.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.172 06:28:26 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.172 06:28:26 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.172 06:28:26 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.172 06:28:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.172 --rc genhtml_branch_coverage=1 00:04:35.172 --rc genhtml_function_coverage=1 00:04:35.172 --rc genhtml_legend=1 00:04:35.172 --rc geninfo_all_blocks=1 00:04:35.172 --rc geninfo_unexecuted_blocks=1 00:04:35.172 00:04:35.172 ' 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.172 --rc genhtml_branch_coverage=1 00:04:35.172 --rc genhtml_function_coverage=1 00:04:35.172 --rc genhtml_legend=1 00:04:35.172 --rc geninfo_all_blocks=1 00:04:35.172 --rc geninfo_unexecuted_blocks=1 00:04:35.172 00:04:35.172 ' 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.172 --rc genhtml_branch_coverage=1 00:04:35.172 --rc genhtml_function_coverage=1 00:04:35.172 --rc genhtml_legend=1 00:04:35.172 --rc geninfo_all_blocks=1 00:04:35.172 --rc geninfo_unexecuted_blocks=1 00:04:35.172 00:04:35.172 ' 00:04:35.172 06:28:27 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.173 --rc genhtml_branch_coverage=1 00:04:35.173 --rc genhtml_function_coverage=1 00:04:35.173 --rc genhtml_legend=1 00:04:35.173 --rc geninfo_all_blocks=1 00:04:35.173 --rc geninfo_unexecuted_blocks=1 00:04:35.173 00:04:35.173 ' 00:04:35.173 06:28:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.173 06:28:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.173 06:28:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.173 06:28:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.173 06:28:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.173 06:28:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.173 ************************************ 00:04:35.173 START TEST skip_rpc 00:04:35.173 ************************************ 00:04:35.173 06:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:35.173 06:28:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57401 00:04:35.173 06:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.173 06:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.173 06:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.430 [2024-11-19 06:28:27.121658] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:35.430 [2024-11-19 06:28:27.121765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57401 ] 00:04:35.430 [2024-11-19 06:28:27.279274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.688 [2024-11-19 06:28:27.387107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57401 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57401 ']' 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57401 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57401 00:04:40.955 killing process with pid 57401 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57401' 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57401 00:04:40.955 06:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57401 00:04:41.526 ************************************ 00:04:41.526 END TEST skip_rpc 00:04:41.526 ************************************ 00:04:41.526 00:04:41.526 real 0m6.292s 00:04:41.526 user 0m5.867s 00:04:41.526 sys 0m0.322s 00:04:41.526 06:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.526 06:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.526 06:28:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.526 06:28:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.526 06:28:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.526 06:28:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.526 ************************************ 00:04:41.526 START TEST skip_rpc_with_json 00:04:41.526 ************************************ 00:04:41.526 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:41.526 06:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.526 06:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57494 00:04:41.526 06:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57494 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57494 ']' 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.527 06:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.787 [2024-11-19 06:28:33.478757] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:41.787 [2024-11-19 06:28:33.478872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57494 ] 00:04:41.787 [2024-11-19 06:28:33.634136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.046 [2024-11-19 06:28:33.725773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.614 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.615 [2024-11-19 06:28:34.274792] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.615 request: 00:04:42.615 { 00:04:42.615 "trtype": "tcp", 00:04:42.615 "method": "nvmf_get_transports", 00:04:42.615 "req_id": 1 00:04:42.615 } 00:04:42.615 Got JSON-RPC error response 00:04:42.615 response: 00:04:42.615 { 00:04:42.615 "code": -19, 00:04:42.615 "message": "No such device" 00:04:42.615 } 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.615 [2024-11-19 06:28:34.286886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.615 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.615 { 00:04:42.615 "subsystems": [ 00:04:42.615 { 00:04:42.615 "subsystem": "fsdev", 00:04:42.615 "config": [ 00:04:42.615 { 00:04:42.615 "method": "fsdev_set_opts", 00:04:42.615 "params": { 00:04:42.615 "fsdev_io_pool_size": 65535, 00:04:42.615 "fsdev_io_cache_size": 256 00:04:42.615 } 00:04:42.615 } 00:04:42.615 ] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "keyring", 00:04:42.615 "config": [] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "iobuf", 00:04:42.615 "config": [ 00:04:42.615 { 00:04:42.615 "method": "iobuf_set_options", 00:04:42.615 "params": { 00:04:42.615 "small_pool_count": 8192, 00:04:42.615 "large_pool_count": 1024, 00:04:42.615 "small_bufsize": 8192, 00:04:42.615 "large_bufsize": 135168, 00:04:42.615 "enable_numa": false 00:04:42.615 } 00:04:42.615 } 00:04:42.615 ] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "sock", 00:04:42.615 "config": [ 00:04:42.615 { 
00:04:42.615 "method": "sock_set_default_impl", 00:04:42.615 "params": { 00:04:42.615 "impl_name": "posix" 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "sock_impl_set_options", 00:04:42.615 "params": { 00:04:42.615 "impl_name": "ssl", 00:04:42.615 "recv_buf_size": 4096, 00:04:42.615 "send_buf_size": 4096, 00:04:42.615 "enable_recv_pipe": true, 00:04:42.615 "enable_quickack": false, 00:04:42.615 "enable_placement_id": 0, 00:04:42.615 "enable_zerocopy_send_server": true, 00:04:42.615 "enable_zerocopy_send_client": false, 00:04:42.615 "zerocopy_threshold": 0, 00:04:42.615 "tls_version": 0, 00:04:42.615 "enable_ktls": false 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "sock_impl_set_options", 00:04:42.615 "params": { 00:04:42.615 "impl_name": "posix", 00:04:42.615 "recv_buf_size": 2097152, 00:04:42.615 "send_buf_size": 2097152, 00:04:42.615 "enable_recv_pipe": true, 00:04:42.615 "enable_quickack": false, 00:04:42.615 "enable_placement_id": 0, 00:04:42.615 "enable_zerocopy_send_server": true, 00:04:42.615 "enable_zerocopy_send_client": false, 00:04:42.615 "zerocopy_threshold": 0, 00:04:42.615 "tls_version": 0, 00:04:42.615 "enable_ktls": false 00:04:42.615 } 00:04:42.615 } 00:04:42.615 ] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "vmd", 00:04:42.615 "config": [] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "accel", 00:04:42.615 "config": [ 00:04:42.615 { 00:04:42.615 "method": "accel_set_options", 00:04:42.615 "params": { 00:04:42.615 "small_cache_size": 128, 00:04:42.615 "large_cache_size": 16, 00:04:42.615 "task_count": 2048, 00:04:42.615 "sequence_count": 2048, 00:04:42.615 "buf_count": 2048 00:04:42.615 } 00:04:42.615 } 00:04:42.615 ] 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "subsystem": "bdev", 00:04:42.615 "config": [ 00:04:42.615 { 00:04:42.615 "method": "bdev_set_options", 00:04:42.615 "params": { 00:04:42.615 "bdev_io_pool_size": 65535, 00:04:42.615 "bdev_io_cache_size": 256, 00:04:42.615 "bdev_auto_examine": true, 00:04:42.615 "iobuf_small_cache_size": 128, 00:04:42.615 "iobuf_large_cache_size": 16 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "bdev_raid_set_options", 00:04:42.615 "params": { 00:04:42.615 "process_window_size_kb": 1024, 00:04:42.615 "process_max_bandwidth_mb_sec": 0 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "bdev_iscsi_set_options", 00:04:42.615 "params": { 00:04:42.615 "timeout_sec": 30 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "bdev_nvme_set_options", 00:04:42.615 "params": { 00:04:42.615 "action_on_timeout": "none", 00:04:42.615 "timeout_us": 0, 00:04:42.615 "timeout_admin_us": 0, 00:04:42.615 "keep_alive_timeout_ms": 10000, 00:04:42.615 "arbitration_burst": 0, 00:04:42.615 "low_priority_weight": 0, 00:04:42.615 "medium_priority_weight": 0, 00:04:42.615 "high_priority_weight": 0, 00:04:42.615 "nvme_adminq_poll_period_us": 10000, 00:04:42.615 "nvme_ioq_poll_period_us": 0, 00:04:42.615 "io_queue_requests": 0, 00:04:42.615 "delay_cmd_submit": true, 00:04:42.615 "transport_retry_count": 4, 00:04:42.615 "bdev_retry_count": 3, 00:04:42.615 "transport_ack_timeout": 0, 00:04:42.615 "ctrlr_loss_timeout_sec": 0, 00:04:42.615 "reconnect_delay_sec": 0, 00:04:42.615 "fast_io_fail_timeout_sec": 0, 00:04:42.615 "disable_auto_failback": false, 00:04:42.615 "generate_uuids": false, 00:04:42.615 "transport_tos": 0, 00:04:42.615 "nvme_error_stat": false, 00:04:42.615 "rdma_srq_size": 0, 00:04:42.615 "io_path_stat": false, 
00:04:42.615 "allow_accel_sequence": false, 00:04:42.615 "rdma_max_cq_size": 0, 00:04:42.615 "rdma_cm_event_timeout_ms": 0, 00:04:42.615 "dhchap_digests": [ 00:04:42.615 "sha256", 00:04:42.615 "sha384", 00:04:42.615 "sha512" 00:04:42.615 ], 00:04:42.615 "dhchap_dhgroups": [ 00:04:42.615 "null", 00:04:42.615 "ffdhe2048", 00:04:42.615 "ffdhe3072", 00:04:42.615 "ffdhe4096", 00:04:42.615 "ffdhe6144", 00:04:42.615 "ffdhe8192" 00:04:42.615 ] 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "bdev_nvme_set_hotplug", 00:04:42.615 "params": { 00:04:42.615 "period_us": 100000, 00:04:42.615 "enable": false 00:04:42.615 } 00:04:42.615 }, 00:04:42.615 { 00:04:42.615 "method": "bdev_wait_for_examine" 00:04:42.615 } 00:04:42.615 ] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "scsi", 00:04:42.616 "config": null 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "scheduler", 00:04:42.616 "config": [ 00:04:42.616 { 00:04:42.616 "method": "framework_set_scheduler", 00:04:42.616 "params": { 00:04:42.616 "name": "static" 00:04:42.616 } 00:04:42.616 } 00:04:42.616 ] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "vhost_scsi", 00:04:42.616 "config": [] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "vhost_blk", 00:04:42.616 "config": [] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "ublk", 00:04:42.616 "config": [] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "nbd", 00:04:42.616 "config": [] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "nvmf", 00:04:42.616 "config": [ 00:04:42.616 { 00:04:42.616 "method": "nvmf_set_config", 00:04:42.616 "params": { 00:04:42.616 "discovery_filter": "match_any", 00:04:42.616 "admin_cmd_passthru": { 00:04:42.616 "identify_ctrlr": false 00:04:42.616 }, 00:04:42.616 "dhchap_digests": [ 00:04:42.616 "sha256", 00:04:42.616 "sha384", 00:04:42.616 "sha512" 00:04:42.616 ], 00:04:42.616 "dhchap_dhgroups": [ 00:04:42.616 "null", 00:04:42.616 "ffdhe2048", 00:04:42.616 "ffdhe3072", 00:04:42.616 "ffdhe4096", 00:04:42.616 "ffdhe6144", 00:04:42.616 "ffdhe8192" 00:04:42.616 ] 00:04:42.616 } 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "method": "nvmf_set_max_subsystems", 00:04:42.616 "params": { 00:04:42.616 "max_subsystems": 1024 00:04:42.616 } 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "method": "nvmf_set_crdt", 00:04:42.616 "params": { 00:04:42.616 "crdt1": 0, 00:04:42.616 "crdt2": 0, 00:04:42.616 "crdt3": 0 00:04:42.616 } 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "method": "nvmf_create_transport", 00:04:42.616 "params": { 00:04:42.616 "trtype": "TCP", 00:04:42.616 "max_queue_depth": 128, 00:04:42.616 "max_io_qpairs_per_ctrlr": 127, 00:04:42.616 "in_capsule_data_size": 4096, 00:04:42.616 "max_io_size": 131072, 00:04:42.616 "io_unit_size": 131072, 00:04:42.616 "max_aq_depth": 128, 00:04:42.616 "num_shared_buffers": 511, 00:04:42.616 "buf_cache_size": 4294967295, 00:04:42.616 "dif_insert_or_strip": false, 00:04:42.616 "zcopy": false, 00:04:42.616 "c2h_success": true, 00:04:42.616 "sock_priority": 0, 00:04:42.616 "abort_timeout_sec": 1, 00:04:42.616 "ack_timeout": 0, 00:04:42.616 "data_wr_pool_size": 0 00:04:42.616 } 00:04:42.616 } 00:04:42.616 ] 00:04:42.616 }, 00:04:42.616 { 00:04:42.616 "subsystem": "iscsi", 00:04:42.616 "config": [ 00:04:42.616 { 00:04:42.616 "method": "iscsi_set_options", 00:04:42.616 "params": { 00:04:42.616 "node_base": "iqn.2016-06.io.spdk", 00:04:42.616 "max_sessions": 128, 00:04:42.616 "max_connections_per_session": 2, 00:04:42.616 "max_queue_depth": 64, 00:04:42.616 
"default_time2wait": 2, 00:04:42.616 "default_time2retain": 20, 00:04:42.616 "first_burst_length": 8192, 00:04:42.616 "immediate_data": true, 00:04:42.616 "allow_duplicated_isid": false, 00:04:42.616 "error_recovery_level": 0, 00:04:42.616 "nop_timeout": 60, 00:04:42.616 "nop_in_interval": 30, 00:04:42.616 "disable_chap": false, 00:04:42.616 "require_chap": false, 00:04:42.616 "mutual_chap": false, 00:04:42.616 "chap_group": 0, 00:04:42.616 "max_large_datain_per_connection": 64, 00:04:42.616 "max_r2t_per_connection": 4, 00:04:42.616 "pdu_pool_size": 36864, 00:04:42.616 "immediate_data_pool_size": 16384, 00:04:42.616 "data_out_pool_size": 2048 00:04:42.616 } 00:04:42.616 } 00:04:42.616 ] 00:04:42.616 } 00:04:42.616 ] 00:04:42.616 } 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57494 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57494 ']' 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57494 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57494 00:04:42.616 killing process with pid 57494 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57494' 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57494 00:04:42.616 06:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57494 00:04:43.996 06:28:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57534 00:04:43.996 06:28:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.996 06:28:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57534 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57534 ']' 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57534 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57534 00:04:49.262 killing process with pid 57534 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57534' 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57534 00:04:49.262 06:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57534 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.204 ************************************ 00:04:50.204 END TEST skip_rpc_with_json 00:04:50.204 ************************************ 00:04:50.204 00:04:50.204 real 0m8.622s 00:04:50.204 user 0m8.126s 00:04:50.204 sys 0m0.685s 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.204 06:28:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.204 06:28:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.204 06:28:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.204 06:28:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.204 ************************************ 00:04:50.204 START TEST skip_rpc_with_delay 00:04:50.204 ************************************ 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:50.204 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.464 [2024-11-19 06:28:42.173433] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.464 ************************************ 00:04:50.464 END TEST skip_rpc_with_delay 00:04:50.464 ************************************ 00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.464 00:04:50.464 real 0m0.142s 00:04:50.464 user 0m0.077s 00:04:50.464 sys 0m0.058s 00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.464 06:28:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.464 06:28:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.464 06:28:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.464 06:28:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.464 06:28:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.464 06:28:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.464 06:28:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.464 ************************************ 00:04:50.464 START TEST exit_on_failed_rpc_init 00:04:50.464 ************************************ 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57651 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57651 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57651 ']' 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.464 06:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.464 [2024-11-19 06:28:42.354192] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:50.464 [2024-11-19 06:28:42.354311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57651 ] 00:04:50.723 [2024-11-19 06:28:42.510395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.723 [2024-11-19 06:28:42.596777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:51.289 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.547 [2024-11-19 06:28:43.261448] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:51.547 [2024-11-19 06:28:43.261576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57669 ] 00:04:51.547 [2024-11-19 06:28:43.414153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.805 [2024-11-19 06:28:43.511008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.806 [2024-11-19 06:28:43.511090] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:51.806 [2024-11-19 06:28:43.511103] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.806 [2024-11-19 06:28:43.511116] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57651 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57651 ']' 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57651 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57651 00:04:51.806 killing process with pid 57651 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57651' 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57651 00:04:51.806 06:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57651 00:04:53.188 ************************************ 00:04:53.188 END TEST exit_on_failed_rpc_init 00:04:53.188 ************************************ 00:04:53.188 00:04:53.188 real 0m2.690s 00:04:53.188 user 0m2.951s 00:04:53.188 sys 0m0.444s 00:04:53.188 06:28:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.188 06:28:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.188 06:28:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.188 ************************************ 00:04:53.188 END TEST skip_rpc 00:04:53.188 ************************************ 00:04:53.188 00:04:53.188 real 0m18.122s 00:04:53.188 user 0m17.169s 00:04:53.188 sys 0m1.690s 00:04:53.188 06:28:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.188 06:28:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.188 06:28:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.188 06:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.188 06:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.188 06:28:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.188 
************************************ 00:04:53.188 START TEST rpc_client 00:04:53.188 ************************************ 00:04:53.188 06:28:45 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.448 * Looking for test storage... 00:04:53.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:53.448 06:28:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.448 06:28:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.448 06:28:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.448 06:28:45 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.448 06:28:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.448 06:28:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.448 06:28:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.448 06:28:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.449 06:28:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.449 --rc genhtml_branch_coverage=1 00:04:53.449 --rc genhtml_function_coverage=1 00:04:53.449 --rc genhtml_legend=1 00:04:53.449 --rc geninfo_all_blocks=1 00:04:53.449 --rc geninfo_unexecuted_blocks=1 00:04:53.449 00:04:53.449 ' 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.449 --rc genhtml_branch_coverage=1 00:04:53.449 --rc genhtml_function_coverage=1 00:04:53.449 --rc genhtml_legend=1 00:04:53.449 --rc geninfo_all_blocks=1 00:04:53.449 --rc geninfo_unexecuted_blocks=1 00:04:53.449 00:04:53.449 ' 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.449 --rc genhtml_branch_coverage=1 00:04:53.449 --rc genhtml_function_coverage=1 00:04:53.449 --rc genhtml_legend=1 00:04:53.449 --rc geninfo_all_blocks=1 00:04:53.449 --rc geninfo_unexecuted_blocks=1 00:04:53.449 00:04:53.449 ' 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.449 --rc genhtml_branch_coverage=1 00:04:53.449 --rc genhtml_function_coverage=1 00:04:53.449 --rc genhtml_legend=1 00:04:53.449 --rc geninfo_all_blocks=1 00:04:53.449 --rc geninfo_unexecuted_blocks=1 00:04:53.449 00:04:53.449 ' 00:04:53.449 06:28:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:53.449 OK 00:04:53.449 06:28:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.449 00:04:53.449 real 0m0.225s 00:04:53.449 user 0m0.129s 00:04:53.449 sys 0m0.097s 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.449 ************************************ 00:04:53.449 END TEST rpc_client 00:04:53.449 ************************************ 00:04:53.449 06:28:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.449 06:28:45 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.449 06:28:45 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.449 06:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.449 06:28:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.449 ************************************ 00:04:53.449 START TEST json_config 00:04:53.449 ************************************ 00:04:53.449 06:28:45 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.710 06:28:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.710 06:28:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.710 06:28:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.710 06:28:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.710 06:28:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.710 06:28:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.710 06:28:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.710 06:28:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.710 06:28:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.710 06:28:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.710 06:28:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.710 06:28:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.710 06:28:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.710 06:28:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.711 06:28:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:53.711 06:28:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.711 06:28:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.711 06:28:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.711 06:28:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.711 06:28:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:53.711 06:28:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:53.711 06:28:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.711 06:28:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:53.711 06:28:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.711 06:28:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.711 06:28:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.711 06:28:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.711 --rc genhtml_branch_coverage=1 00:04:53.711 --rc genhtml_function_coverage=1 00:04:53.711 --rc genhtml_legend=1 00:04:53.711 --rc geninfo_all_blocks=1 00:04:53.711 --rc geninfo_unexecuted_blocks=1 00:04:53.711 00:04:53.711 ' 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.711 --rc genhtml_branch_coverage=1 00:04:53.711 --rc genhtml_function_coverage=1 00:04:53.711 --rc genhtml_legend=1 00:04:53.711 --rc geninfo_all_blocks=1 00:04:53.711 --rc geninfo_unexecuted_blocks=1 00:04:53.711 00:04:53.711 ' 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.711 --rc genhtml_branch_coverage=1 00:04:53.711 --rc genhtml_function_coverage=1 00:04:53.711 --rc genhtml_legend=1 00:04:53.711 --rc geninfo_all_blocks=1 00:04:53.711 --rc geninfo_unexecuted_blocks=1 00:04:53.711 00:04:53.711 ' 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.711 --rc genhtml_branch_coverage=1 00:04:53.711 --rc genhtml_function_coverage=1 00:04:53.711 --rc genhtml_legend=1 00:04:53.711 --rc geninfo_all_blocks=1 00:04:53.711 --rc geninfo_unexecuted_blocks=1 00:04:53.711 00:04:53.711 ' 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.711 06:28:45 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.711 06:28:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.711 06:28:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.711 06:28:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.711 06:28:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.711 06:28:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.711 06:28:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.711 06:28:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.711 06:28:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.711 06:28:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.711 06:28:45 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.711 06:28:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.711 WARNING: No tests are enabled so not running JSON configuration tests 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:53.711 06:28:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:53.711 00:04:53.711 real 0m0.149s 00:04:53.711 user 0m0.085s 00:04:53.711 sys 0m0.061s 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.711 ************************************ 00:04:53.711 END TEST json_config 00:04:53.711 ************************************ 00:04:53.711 06:28:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.711 06:28:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.711 06:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.711 06:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.711 06:28:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.711 ************************************ 00:04:53.711 START TEST json_config_extra_key 00:04:53.711 ************************************ 00:04:53.711 06:28:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.711 06:28:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.711 06:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.711 06:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.974 06:28:45 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.974 06:28:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.974 --rc genhtml_branch_coverage=1 00:04:53.974 --rc genhtml_function_coverage=1 00:04:53.974 --rc genhtml_legend=1 00:04:53.974 --rc geninfo_all_blocks=1 00:04:53.974 --rc geninfo_unexecuted_blocks=1 00:04:53.974 00:04:53.974 ' 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.974 --rc genhtml_branch_coverage=1 00:04:53.974 --rc genhtml_function_coverage=1 00:04:53.974 --rc genhtml_legend=1 00:04:53.974 --rc geninfo_all_blocks=1 00:04:53.974 --rc geninfo_unexecuted_blocks=1 00:04:53.974 00:04:53.974 ' 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.974 --rc genhtml_branch_coverage=1 00:04:53.974 --rc genhtml_function_coverage=1 00:04:53.974 --rc genhtml_legend=1 00:04:53.974 --rc geninfo_all_blocks=1 00:04:53.974 --rc geninfo_unexecuted_blocks=1 00:04:53.974 00:04:53.974 ' 00:04:53.974 06:28:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.974 --rc genhtml_branch_coverage=1 00:04:53.974 --rc 
genhtml_function_coverage=1 00:04:53.974 --rc genhtml_legend=1 00:04:53.974 --rc geninfo_all_blocks=1 00:04:53.974 --rc geninfo_unexecuted_blocks=1 00:04:53.974 00:04:53.974 ' 00:04:53.974 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dcd17b17-72e5-4db9-b5dd-4e7cd1a93bdd 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.974 06:28:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.975 06:28:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.975 06:28:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.975 06:28:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.975 06:28:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.975 06:28:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.975 06:28:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.975 06:28:45 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.975 06:28:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.975 06:28:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.975 06:28:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:53.975 INFO: launching applications... 
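Note on the repeated "[: : integer expression expected" message above: it is bash complaining that nvmf/common.sh line 33 ran a numeric test ('[' '' -eq 1 ']') on an empty value, because the flag variable being tested was never exported for this job. A minimal sketch of the failing pattern and a defensive rewrite follows; SOME_FLAG is a placeholder here, not the variable actually used in nvmf/common.sh:

# Failing form: empty expansion breaks the arithmetic test
#   [ "$SOME_FLAG" -eq 1 ]        # -> "[: : integer expression expected" when unset
# Guarded form: default the expansion to 0 so the test stays numeric
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi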
00:04:53.975 06:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57862 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.975 Waiting for target to run... 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57862 /var/tmp/spdk_tgt.sock 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57862 ']' 00:04:53.975 06:28:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.975 06:28:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.975 [2024-11-19 06:28:45.840867] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:53.975 [2024-11-19 06:28:45.841357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57862 ] 00:04:54.620 [2024-11-19 06:28:46.364427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.620 [2024-11-19 06:28:46.496637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.192 00:04:55.192 INFO: shutting down applications... 00:04:55.192 06:28:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.192 06:28:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:55.192 06:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
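The waitforlisten call above blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk_tgt.sock. A minimal sketch of such a wait loop, assuming an rpc.py probe and a retry budget of 100 (this is not the actual helper from common/autotest_common.sh, just the shape of it):

# Poll until the target either answers an RPC on its UNIX socket or dies
wait_for_socket() {
    local pid=$1 sock=$2 retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # give up early if the target exited
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0                             # RPC answered: target is listening
        fi
        sleep 0.1
    done
    return 1
}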
00:04:55.192 06:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57862 ]] 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57862 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57862 00:04:55.192 06:28:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.764 06:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.764 06:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.764 06:28:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57862 00:04:55.764 06:28:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.336 06:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.336 06:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.336 06:28:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57862 00:04:56.336 06:28:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.904 06:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.904 06:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.904 06:28:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57862 00:04:56.904 06:28:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57862 00:04:57.166 SPDK target shutdown done 00:04:57.166 Success 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.166 06:28:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.166 06:28:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.166 ************************************ 00:04:57.166 END TEST json_config_extra_key 00:04:57.166 ************************************ 00:04:57.166 00:04:57.166 real 0m3.516s 00:04:57.166 user 0m2.897s 00:04:57.166 sys 0m0.687s 00:04:57.166 06:28:49 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.166 06:28:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.425 06:28:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.425 06:28:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.425 06:28:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.425 06:28:49 -- common/autotest_common.sh@10 -- # set +x 00:04:57.425 
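The shutdown traced above sends SIGINT and then polls kill -0 every 0.5 s for up to 30 iterations before declaring the target gone. A condensed sketch of that pattern as it appears in json_config/common.sh (error paths trimmed):

# Send SIGINT, then wait up to ~15 s for the PID to disappear
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only probes for existence; it delivers no signal
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1
}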
************************************ 00:04:57.425 START TEST alias_rpc 00:04:57.425 ************************************ 00:04:57.425 06:28:49 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.425 * Looking for test storage... 00:04:57.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:57.425 06:28:49 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.425 06:28:49 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.425 06:28:49 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.425 06:28:49 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.425 06:28:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.426 06:28:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.426 --rc genhtml_branch_coverage=1 00:04:57.426 --rc genhtml_function_coverage=1 00:04:57.426 --rc genhtml_legend=1 00:04:57.426 --rc geninfo_all_blocks=1 00:04:57.426 --rc geninfo_unexecuted_blocks=1 00:04:57.426 00:04:57.426 ' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.426 --rc genhtml_branch_coverage=1 00:04:57.426 --rc genhtml_function_coverage=1 00:04:57.426 --rc genhtml_legend=1 00:04:57.426 --rc geninfo_all_blocks=1 00:04:57.426 --rc geninfo_unexecuted_blocks=1 00:04:57.426 00:04:57.426 ' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.426 --rc genhtml_branch_coverage=1 00:04:57.426 --rc genhtml_function_coverage=1 00:04:57.426 --rc genhtml_legend=1 00:04:57.426 --rc geninfo_all_blocks=1 00:04:57.426 --rc geninfo_unexecuted_blocks=1 00:04:57.426 00:04:57.426 ' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.426 --rc genhtml_branch_coverage=1 00:04:57.426 --rc genhtml_function_coverage=1 00:04:57.426 --rc genhtml_legend=1 00:04:57.426 --rc geninfo_all_blocks=1 00:04:57.426 --rc geninfo_unexecuted_blocks=1 00:04:57.426 00:04:57.426 ' 00:04:57.426 06:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.426 06:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57961 00:04:57.426 06:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57961 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57961 ']' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.426 06:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.426 06:28:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.683 [2024-11-19 06:28:49.384126] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:57.683 [2024-11-19 06:28:49.384253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ] 00:04:57.683 [2024-11-19 06:28:49.543254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.941 [2024-11-19 06:28:49.656265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.508 06:28:50 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.508 06:28:50 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.508 06:28:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:58.766 06:28:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57961 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57961 ']' 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57961 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57961 00:04:58.766 killing process with pid 57961 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.766 06:28:50 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.767 06:28:50 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57961' 00:04:58.767 06:28:50 alias_rpc -- common/autotest_common.sh@973 -- # kill 57961 00:04:58.767 06:28:50 alias_rpc -- common/autotest_common.sh@978 -- # wait 57961 00:05:00.142 ************************************ 00:05:00.142 END TEST alias_rpc 00:05:00.142 ************************************ 00:05:00.142 00:05:00.142 real 0m2.727s 00:05:00.142 user 0m2.761s 00:05:00.142 sys 0m0.488s 00:05:00.142 06:28:51 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.142 06:28:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.142 06:28:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.142 06:28:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.142 06:28:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.142 06:28:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.142 06:28:51 -- common/autotest_common.sh@10 -- # set +x 00:05:00.143 ************************************ 00:05:00.143 START TEST spdkcli_tcp 00:05:00.143 ************************************ 00:05:00.143 06:28:51 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.143 * Looking for test storage... 
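The killprocess steps traced above first confirm the PID is alive and still belongs to an SPDK reactor (ps -o comm= reports reactor_0), refuse to touch anything running as sudo, then kill and reap it. A simplified sketch of that flow, with the retry and error handling of the real helper omitted:

# Simplified view of the killprocess sequence seen in the alias_rpc trace
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # is the process still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # expected to be reactor_0
    [ "$name" = sudo ] && return 1                # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it when it is our child
}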
00:05:00.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:00.143 06:28:51 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.143 06:28:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.143 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.143 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.143 06:28:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.401 06:28:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.401 06:28:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.401 06:28:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.401 06:28:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.401 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.401 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.401 --rc genhtml_branch_coverage=1 00:05:00.401 --rc genhtml_function_coverage=1 00:05:00.401 --rc genhtml_legend=1 00:05:00.401 --rc geninfo_all_blocks=1 00:05:00.401 --rc geninfo_unexecuted_blocks=1 00:05:00.401 00:05:00.401 ' 00:05:00.401 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.401 --rc genhtml_branch_coverage=1 00:05:00.401 --rc genhtml_function_coverage=1 00:05:00.401 --rc genhtml_legend=1 00:05:00.401 --rc geninfo_all_blocks=1 00:05:00.401 --rc geninfo_unexecuted_blocks=1 00:05:00.401 
00:05:00.401 ' 00:05:00.401 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.401 --rc genhtml_branch_coverage=1 00:05:00.402 --rc genhtml_function_coverage=1 00:05:00.402 --rc genhtml_legend=1 00:05:00.402 --rc geninfo_all_blocks=1 00:05:00.402 --rc geninfo_unexecuted_blocks=1 00:05:00.402 00:05:00.402 ' 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.402 --rc genhtml_branch_coverage=1 00:05:00.402 --rc genhtml_function_coverage=1 00:05:00.402 --rc genhtml_legend=1 00:05:00.402 --rc geninfo_all_blocks=1 00:05:00.402 --rc geninfo_unexecuted_blocks=1 00:05:00.402 00:05:00.402 ' 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58052 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58052 00:05:00.402 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58052 ']' 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.402 06:28:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.402 [2024-11-19 06:28:52.153715] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:00.402 [2024-11-19 06:28:52.153822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ] 00:05:00.402 [2024-11-19 06:28:52.310221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.660 [2024-11-19 06:28:52.404739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.660 [2024-11-19 06:28:52.404805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.227 06:28:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.227 06:28:52 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.227 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58068 00:05:01.227 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.227 06:28:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.487 [ 00:05:01.487 "bdev_malloc_delete", 00:05:01.487 "bdev_malloc_create", 00:05:01.487 "bdev_null_resize", 00:05:01.487 "bdev_null_delete", 00:05:01.487 "bdev_null_create", 00:05:01.487 "bdev_nvme_cuse_unregister", 00:05:01.487 "bdev_nvme_cuse_register", 00:05:01.487 "bdev_opal_new_user", 00:05:01.487 "bdev_opal_set_lock_state", 00:05:01.487 "bdev_opal_delete", 00:05:01.487 "bdev_opal_get_info", 00:05:01.487 "bdev_opal_create", 00:05:01.487 "bdev_nvme_opal_revert", 00:05:01.487 "bdev_nvme_opal_init", 00:05:01.487 "bdev_nvme_send_cmd", 00:05:01.487 "bdev_nvme_set_keys", 00:05:01.487 "bdev_nvme_get_path_iostat", 00:05:01.487 "bdev_nvme_get_mdns_discovery_info", 00:05:01.487 "bdev_nvme_stop_mdns_discovery", 00:05:01.487 "bdev_nvme_start_mdns_discovery", 00:05:01.487 "bdev_nvme_set_multipath_policy", 00:05:01.487 "bdev_nvme_set_preferred_path", 00:05:01.487 "bdev_nvme_get_io_paths", 00:05:01.487 "bdev_nvme_remove_error_injection", 00:05:01.487 "bdev_nvme_add_error_injection", 00:05:01.487 "bdev_nvme_get_discovery_info", 00:05:01.487 "bdev_nvme_stop_discovery", 00:05:01.487 "bdev_nvme_start_discovery", 00:05:01.487 "bdev_nvme_get_controller_health_info", 00:05:01.487 "bdev_nvme_disable_controller", 00:05:01.487 "bdev_nvme_enable_controller", 00:05:01.487 "bdev_nvme_reset_controller", 00:05:01.487 "bdev_nvme_get_transport_statistics", 00:05:01.487 "bdev_nvme_apply_firmware", 00:05:01.487 "bdev_nvme_detach_controller", 00:05:01.487 "bdev_nvme_get_controllers", 00:05:01.487 "bdev_nvme_attach_controller", 00:05:01.487 "bdev_nvme_set_hotplug", 00:05:01.487 "bdev_nvme_set_options", 00:05:01.487 "bdev_passthru_delete", 00:05:01.487 "bdev_passthru_create", 00:05:01.487 "bdev_lvol_set_parent_bdev", 00:05:01.487 "bdev_lvol_set_parent", 00:05:01.487 "bdev_lvol_check_shallow_copy", 00:05:01.487 "bdev_lvol_start_shallow_copy", 00:05:01.487 "bdev_lvol_grow_lvstore", 00:05:01.487 "bdev_lvol_get_lvols", 00:05:01.487 "bdev_lvol_get_lvstores", 00:05:01.487 "bdev_lvol_delete", 00:05:01.487 "bdev_lvol_set_read_only", 00:05:01.487 "bdev_lvol_resize", 00:05:01.487 "bdev_lvol_decouple_parent", 00:05:01.487 "bdev_lvol_inflate", 00:05:01.487 "bdev_lvol_rename", 00:05:01.487 "bdev_lvol_clone_bdev", 00:05:01.487 "bdev_lvol_clone", 00:05:01.487 "bdev_lvol_snapshot", 00:05:01.487 "bdev_lvol_create", 00:05:01.487 "bdev_lvol_delete_lvstore", 00:05:01.487 "bdev_lvol_rename_lvstore", 00:05:01.487 
"bdev_lvol_create_lvstore", 00:05:01.487 "bdev_raid_set_options", 00:05:01.487 "bdev_raid_remove_base_bdev", 00:05:01.487 "bdev_raid_add_base_bdev", 00:05:01.487 "bdev_raid_delete", 00:05:01.487 "bdev_raid_create", 00:05:01.487 "bdev_raid_get_bdevs", 00:05:01.487 "bdev_error_inject_error", 00:05:01.487 "bdev_error_delete", 00:05:01.487 "bdev_error_create", 00:05:01.487 "bdev_split_delete", 00:05:01.487 "bdev_split_create", 00:05:01.487 "bdev_delay_delete", 00:05:01.487 "bdev_delay_create", 00:05:01.487 "bdev_delay_update_latency", 00:05:01.487 "bdev_zone_block_delete", 00:05:01.487 "bdev_zone_block_create", 00:05:01.487 "blobfs_create", 00:05:01.487 "blobfs_detect", 00:05:01.487 "blobfs_set_cache_size", 00:05:01.487 "bdev_xnvme_delete", 00:05:01.487 "bdev_xnvme_create", 00:05:01.487 "bdev_aio_delete", 00:05:01.487 "bdev_aio_rescan", 00:05:01.487 "bdev_aio_create", 00:05:01.487 "bdev_ftl_set_property", 00:05:01.487 "bdev_ftl_get_properties", 00:05:01.487 "bdev_ftl_get_stats", 00:05:01.487 "bdev_ftl_unmap", 00:05:01.487 "bdev_ftl_unload", 00:05:01.487 "bdev_ftl_delete", 00:05:01.487 "bdev_ftl_load", 00:05:01.487 "bdev_ftl_create", 00:05:01.487 "bdev_virtio_attach_controller", 00:05:01.487 "bdev_virtio_scsi_get_devices", 00:05:01.487 "bdev_virtio_detach_controller", 00:05:01.487 "bdev_virtio_blk_set_hotplug", 00:05:01.487 "bdev_iscsi_delete", 00:05:01.487 "bdev_iscsi_create", 00:05:01.487 "bdev_iscsi_set_options", 00:05:01.487 "accel_error_inject_error", 00:05:01.487 "ioat_scan_accel_module", 00:05:01.487 "dsa_scan_accel_module", 00:05:01.487 "iaa_scan_accel_module", 00:05:01.487 "keyring_file_remove_key", 00:05:01.487 "keyring_file_add_key", 00:05:01.487 "keyring_linux_set_options", 00:05:01.487 "fsdev_aio_delete", 00:05:01.487 "fsdev_aio_create", 00:05:01.487 "iscsi_get_histogram", 00:05:01.487 "iscsi_enable_histogram", 00:05:01.487 "iscsi_set_options", 00:05:01.487 "iscsi_get_auth_groups", 00:05:01.487 "iscsi_auth_group_remove_secret", 00:05:01.487 "iscsi_auth_group_add_secret", 00:05:01.487 "iscsi_delete_auth_group", 00:05:01.487 "iscsi_create_auth_group", 00:05:01.487 "iscsi_set_discovery_auth", 00:05:01.487 "iscsi_get_options", 00:05:01.487 "iscsi_target_node_request_logout", 00:05:01.487 "iscsi_target_node_set_redirect", 00:05:01.487 "iscsi_target_node_set_auth", 00:05:01.487 "iscsi_target_node_add_lun", 00:05:01.487 "iscsi_get_stats", 00:05:01.487 "iscsi_get_connections", 00:05:01.487 "iscsi_portal_group_set_auth", 00:05:01.487 "iscsi_start_portal_group", 00:05:01.487 "iscsi_delete_portal_group", 00:05:01.487 "iscsi_create_portal_group", 00:05:01.487 "iscsi_get_portal_groups", 00:05:01.487 "iscsi_delete_target_node", 00:05:01.487 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.487 "iscsi_target_node_add_pg_ig_maps", 00:05:01.487 "iscsi_create_target_node", 00:05:01.487 "iscsi_get_target_nodes", 00:05:01.487 "iscsi_delete_initiator_group", 00:05:01.487 "iscsi_initiator_group_remove_initiators", 00:05:01.487 "iscsi_initiator_group_add_initiators", 00:05:01.487 "iscsi_create_initiator_group", 00:05:01.487 "iscsi_get_initiator_groups", 00:05:01.487 "nvmf_set_crdt", 00:05:01.487 "nvmf_set_config", 00:05:01.487 "nvmf_set_max_subsystems", 00:05:01.487 "nvmf_stop_mdns_prr", 00:05:01.487 "nvmf_publish_mdns_prr", 00:05:01.487 "nvmf_subsystem_get_listeners", 00:05:01.487 "nvmf_subsystem_get_qpairs", 00:05:01.487 "nvmf_subsystem_get_controllers", 00:05:01.487 "nvmf_get_stats", 00:05:01.487 "nvmf_get_transports", 00:05:01.487 "nvmf_create_transport", 00:05:01.487 "nvmf_get_targets", 00:05:01.487 
"nvmf_delete_target", 00:05:01.487 "nvmf_create_target", 00:05:01.487 "nvmf_subsystem_allow_any_host", 00:05:01.487 "nvmf_subsystem_set_keys", 00:05:01.487 "nvmf_subsystem_remove_host", 00:05:01.487 "nvmf_subsystem_add_host", 00:05:01.487 "nvmf_ns_remove_host", 00:05:01.487 "nvmf_ns_add_host", 00:05:01.487 "nvmf_subsystem_remove_ns", 00:05:01.487 "nvmf_subsystem_set_ns_ana_group", 00:05:01.487 "nvmf_subsystem_add_ns", 00:05:01.487 "nvmf_subsystem_listener_set_ana_state", 00:05:01.487 "nvmf_discovery_get_referrals", 00:05:01.487 "nvmf_discovery_remove_referral", 00:05:01.487 "nvmf_discovery_add_referral", 00:05:01.487 "nvmf_subsystem_remove_listener", 00:05:01.487 "nvmf_subsystem_add_listener", 00:05:01.487 "nvmf_delete_subsystem", 00:05:01.487 "nvmf_create_subsystem", 00:05:01.487 "nvmf_get_subsystems", 00:05:01.487 "env_dpdk_get_mem_stats", 00:05:01.487 "nbd_get_disks", 00:05:01.487 "nbd_stop_disk", 00:05:01.487 "nbd_start_disk", 00:05:01.487 "ublk_recover_disk", 00:05:01.487 "ublk_get_disks", 00:05:01.487 "ublk_stop_disk", 00:05:01.487 "ublk_start_disk", 00:05:01.487 "ublk_destroy_target", 00:05:01.487 "ublk_create_target", 00:05:01.487 "virtio_blk_create_transport", 00:05:01.487 "virtio_blk_get_transports", 00:05:01.487 "vhost_controller_set_coalescing", 00:05:01.487 "vhost_get_controllers", 00:05:01.487 "vhost_delete_controller", 00:05:01.487 "vhost_create_blk_controller", 00:05:01.487 "vhost_scsi_controller_remove_target", 00:05:01.487 "vhost_scsi_controller_add_target", 00:05:01.487 "vhost_start_scsi_controller", 00:05:01.487 "vhost_create_scsi_controller", 00:05:01.487 "thread_set_cpumask", 00:05:01.487 "scheduler_set_options", 00:05:01.487 "framework_get_governor", 00:05:01.487 "framework_get_scheduler", 00:05:01.487 "framework_set_scheduler", 00:05:01.487 "framework_get_reactors", 00:05:01.487 "thread_get_io_channels", 00:05:01.487 "thread_get_pollers", 00:05:01.487 "thread_get_stats", 00:05:01.487 "framework_monitor_context_switch", 00:05:01.487 "spdk_kill_instance", 00:05:01.487 "log_enable_timestamps", 00:05:01.487 "log_get_flags", 00:05:01.487 "log_clear_flag", 00:05:01.487 "log_set_flag", 00:05:01.487 "log_get_level", 00:05:01.487 "log_set_level", 00:05:01.487 "log_get_print_level", 00:05:01.487 "log_set_print_level", 00:05:01.487 "framework_enable_cpumask_locks", 00:05:01.487 "framework_disable_cpumask_locks", 00:05:01.487 "framework_wait_init", 00:05:01.487 "framework_start_init", 00:05:01.487 "scsi_get_devices", 00:05:01.487 "bdev_get_histogram", 00:05:01.487 "bdev_enable_histogram", 00:05:01.487 "bdev_set_qos_limit", 00:05:01.487 "bdev_set_qd_sampling_period", 00:05:01.487 "bdev_get_bdevs", 00:05:01.487 "bdev_reset_iostat", 00:05:01.487 "bdev_get_iostat", 00:05:01.488 "bdev_examine", 00:05:01.488 "bdev_wait_for_examine", 00:05:01.488 "bdev_set_options", 00:05:01.488 "accel_get_stats", 00:05:01.488 "accel_set_options", 00:05:01.488 "accel_set_driver", 00:05:01.488 "accel_crypto_key_destroy", 00:05:01.488 "accel_crypto_keys_get", 00:05:01.488 "accel_crypto_key_create", 00:05:01.488 "accel_assign_opc", 00:05:01.488 "accel_get_module_info", 00:05:01.488 "accel_get_opc_assignments", 00:05:01.488 "vmd_rescan", 00:05:01.488 "vmd_remove_device", 00:05:01.488 "vmd_enable", 00:05:01.488 "sock_get_default_impl", 00:05:01.488 "sock_set_default_impl", 00:05:01.488 "sock_impl_set_options", 00:05:01.488 "sock_impl_get_options", 00:05:01.488 "iobuf_get_stats", 00:05:01.488 "iobuf_set_options", 00:05:01.488 "keyring_get_keys", 00:05:01.488 "framework_get_pci_devices", 00:05:01.488 
"framework_get_config", 00:05:01.488 "framework_get_subsystems", 00:05:01.488 "fsdev_set_opts", 00:05:01.488 "fsdev_get_opts", 00:05:01.488 "trace_get_info", 00:05:01.488 "trace_get_tpoint_group_mask", 00:05:01.488 "trace_disable_tpoint_group", 00:05:01.488 "trace_enable_tpoint_group", 00:05:01.488 "trace_clear_tpoint_mask", 00:05:01.488 "trace_set_tpoint_mask", 00:05:01.488 "notify_get_notifications", 00:05:01.488 "notify_get_types", 00:05:01.488 "spdk_get_version", 00:05:01.488 "rpc_get_methods" 00:05:01.488 ] 00:05:01.488 06:28:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.488 06:28:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.488 06:28:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58052 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58052 ']' 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58052 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58052 00:05:01.488 killing process with pid 58052 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58052' 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58052 00:05:01.488 06:28:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58052 00:05:02.862 ************************************ 00:05:02.862 END TEST spdkcli_tcp 00:05:02.862 ************************************ 00:05:02.862 00:05:02.862 real 0m2.566s 00:05:02.862 user 0m4.551s 00:05:02.862 sys 0m0.469s 00:05:02.862 06:28:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.862 06:28:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.862 06:28:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.862 06:28:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.862 06:28:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.862 06:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.862 ************************************ 00:05:02.862 START TEST dpdk_mem_utility 00:05:02.862 ************************************ 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.862 * Looking for test storage... 
00:05:02.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.862 06:28:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.862 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.862 --rc genhtml_branch_coverage=1 00:05:02.862 --rc genhtml_function_coverage=1 00:05:02.862 --rc genhtml_legend=1 00:05:02.862 --rc geninfo_all_blocks=1 00:05:02.863 --rc geninfo_unexecuted_blocks=1 00:05:02.863 00:05:02.863 ' 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.863 --rc 
genhtml_branch_coverage=1 00:05:02.863 --rc genhtml_function_coverage=1 00:05:02.863 --rc genhtml_legend=1 00:05:02.863 --rc geninfo_all_blocks=1 00:05:02.863 --rc geninfo_unexecuted_blocks=1 00:05:02.863 00:05:02.863 ' 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.863 --rc genhtml_branch_coverage=1 00:05:02.863 --rc genhtml_function_coverage=1 00:05:02.863 --rc genhtml_legend=1 00:05:02.863 --rc geninfo_all_blocks=1 00:05:02.863 --rc geninfo_unexecuted_blocks=1 00:05:02.863 00:05:02.863 ' 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.863 --rc genhtml_branch_coverage=1 00:05:02.863 --rc genhtml_function_coverage=1 00:05:02.863 --rc genhtml_legend=1 00:05:02.863 --rc geninfo_all_blocks=1 00:05:02.863 --rc geninfo_unexecuted_blocks=1 00:05:02.863 00:05:02.863 ' 00:05:02.863 06:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.863 06:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58157 00:05:02.863 06:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.863 06:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58157 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58157 ']' 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.863 06:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.863 [2024-11-19 06:28:54.762216] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:02.863 [2024-11-19 06:28:54.762461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58157 ] 00:05:03.120 [2024-11-19 06:28:54.920497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.120 [2024-11-19 06:28:55.026900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.059 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.059 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:04.059 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.059 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.059 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.059 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.059 { 00:05:04.059 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.059 } 00:05:04.059 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.059 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.059 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:04.059 1 heaps totaling size 816.000000 MiB 00:05:04.059 size: 816.000000 MiB heap id: 0 00:05:04.059 end heaps---------- 00:05:04.059 9 mempools totaling size 595.772034 MiB 00:05:04.059 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.059 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.059 size: 92.545471 MiB name: bdev_io_58157 00:05:04.059 size: 50.003479 MiB name: msgpool_58157 00:05:04.059 size: 36.509338 MiB name: fsdev_io_58157 00:05:04.059 size: 21.763794 MiB name: PDU_Pool 00:05:04.059 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.059 size: 4.133484 MiB name: evtpool_58157 00:05:04.059 size: 0.026123 MiB name: Session_Pool 00:05:04.059 end mempools------- 00:05:04.059 6 memzones totaling size 4.142822 MiB 00:05:04.059 size: 1.000366 MiB name: RG_ring_0_58157 00:05:04.059 size: 1.000366 MiB name: RG_ring_1_58157 00:05:04.059 size: 1.000366 MiB name: RG_ring_4_58157 00:05:04.059 size: 1.000366 MiB name: RG_ring_5_58157 00:05:04.059 size: 0.125366 MiB name: RG_ring_2_58157 00:05:04.059 size: 0.015991 MiB name: RG_ring_3_58157 00:05:04.059 end memzones------- 00:05:04.059 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.059 heap id: 0 total size: 816.000000 MiB number of busy elements: 325 number of free elements: 18 00:05:04.060 list of free elements. 
size: 16.788940 MiB 00:05:04.060 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:04.060 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:04.060 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:04.060 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:04.060 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:04.060 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:04.060 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:04.060 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:04.060 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:04.060 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:04.060 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:04.060 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:05:04.060 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:04.060 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:04.060 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:04.060 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:04.060 element at address: 0x200028000000 with size: 0.390930 MiB 00:05:04.060 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:04.060 list of standard malloc elements. size: 199.290161 MiB 00:05:04.060 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:04.060 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:04.060 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:04.060 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:04.060 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:04.060 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:04.060 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:04.060 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:04.060 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:04.060 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:04.060 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:04.060 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:04.060 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:04.060 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:04.060 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:05:04.061 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:04.061 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:04.061 element at address: 0x200028064140 with size: 0.000244 MiB 00:05:04.061 element at address: 0x200028064240 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806af00 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ce80 
with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:04.061 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:04.061 list of memzone associated elements. 
size: 599.920898 MiB 00:05:04.061 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:04.061 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.061 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:04.061 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.061 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:04.061 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58157_0 00:05:04.061 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:04.061 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58157_0 00:05:04.061 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:04.061 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58157_0 00:05:04.061 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:04.062 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.062 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:04.062 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.062 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:04.062 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58157_0 00:05:04.062 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:04.062 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58157 00:05:04.062 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:04.062 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58157 00:05:04.062 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:04.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.062 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:04.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.062 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:04.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.062 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:04.062 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.062 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:04.062 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58157 00:05:04.062 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:04.062 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58157 00:05:04.062 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:04.062 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58157 00:05:04.062 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:04.062 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58157 00:05:04.062 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:04.062 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58157 00:05:04.062 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:04.062 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58157 00:05:04.062 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:04.062 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.062 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:04.062 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.062 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:04.062 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.062 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:04.062 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58157 00:05:04.062 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:04.062 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58157 00:05:04.062 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:04.062 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.062 element at address: 0x200028064340 with size: 0.023804 MiB 00:05:04.062 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.062 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:04.062 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58157 00:05:04.062 element at address: 0x20002806a4c0 with size: 0.002502 MiB 00:05:04.062 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.062 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:04.062 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58157 00:05:04.062 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:04.062 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58157 00:05:04.062 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:04.062 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58157 00:05:04.062 element at address: 0x20002806b000 with size: 0.000366 MiB 00:05:04.062 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.062 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.062 06:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58157 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58157 ']' 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58157 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58157 00:05:04.062 killing process with pid 58157 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58157' 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58157 00:05:04.062 06:28:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58157 00:05:05.499 ************************************ 00:05:05.499 END TEST dpdk_mem_utility 00:05:05.499 ************************************ 00:05:05.499 00:05:05.499 real 0m2.546s 00:05:05.499 user 0m2.474s 00:05:05.499 sys 0m0.432s 00:05:05.499 06:28:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.499 06:28:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.499 06:28:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.499 06:28:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.499 06:28:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.499 06:28:57 -- common/autotest_common.sh@10 -- # set +x 
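The malloc-element and memzone listings above are the raw dump that the dpdk_mem_utility suite collects from the running SPDK target before and after allocations are compared. A rough way to reproduce such a dump by hand is sketched below; this is not the autotest script itself, and both the spdk_tgt path and the dump filename are assumptions (the env_dpdk_get_mem_stats RPC reports the actual file it wrote).

    # Sketch only: start an SPDK target, ask DPDK for its memory layout over RPC,
    # then skim the dump. rpc.py path matches the repo layout seen in this log.
    ./build/bin/spdk_tgt &                                   # assumed binary location
    tgt_pid=$!
    sleep 2                                                  # crude wait for the RPC socket (assumption)
    ./scripts/rpc.py env_dpdk_get_mem_stats                  # writes a malloc/memzone dump like the one above
    grep -c 'element at address' /tmp/spdk_mem_dump.txt      # assumed default dump path
    kill "$tgt_pid"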
00:05:05.499 ************************************ 00:05:05.499 START TEST event 00:05:05.499 ************************************ 00:05:05.499 06:28:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.500 * Looking for test storage... 00:05:05.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.500 06:28:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.500 06:28:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.500 06:28:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.500 06:28:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.500 06:28:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.500 06:28:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.500 06:28:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.500 06:28:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.500 06:28:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.500 06:28:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.500 06:28:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.500 06:28:57 event -- scripts/common.sh@344 -- # case "$op" in 00:05:05.500 06:28:57 event -- scripts/common.sh@345 -- # : 1 00:05:05.500 06:28:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.500 06:28:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.500 06:28:57 event -- scripts/common.sh@365 -- # decimal 1 00:05:05.500 06:28:57 event -- scripts/common.sh@353 -- # local d=1 00:05:05.500 06:28:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.500 06:28:57 event -- scripts/common.sh@355 -- # echo 1 00:05:05.500 06:28:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.500 06:28:57 event -- scripts/common.sh@366 -- # decimal 2 00:05:05.500 06:28:57 event -- scripts/common.sh@353 -- # local d=2 00:05:05.500 06:28:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.500 06:28:57 event -- scripts/common.sh@355 -- # echo 2 00:05:05.500 06:28:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.500 06:28:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.500 06:28:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.500 06:28:57 event -- scripts/common.sh@368 -- # return 0 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.500 --rc genhtml_branch_coverage=1 00:05:05.500 --rc genhtml_function_coverage=1 00:05:05.500 --rc genhtml_legend=1 00:05:05.500 --rc geninfo_all_blocks=1 00:05:05.500 --rc geninfo_unexecuted_blocks=1 00:05:05.500 00:05:05.500 ' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.500 --rc genhtml_branch_coverage=1 00:05:05.500 --rc genhtml_function_coverage=1 00:05:05.500 --rc genhtml_legend=1 00:05:05.500 --rc 
geninfo_all_blocks=1 00:05:05.500 --rc geninfo_unexecuted_blocks=1 00:05:05.500 00:05:05.500 ' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.500 --rc genhtml_branch_coverage=1 00:05:05.500 --rc genhtml_function_coverage=1 00:05:05.500 --rc genhtml_legend=1 00:05:05.500 --rc geninfo_all_blocks=1 00:05:05.500 --rc geninfo_unexecuted_blocks=1 00:05:05.500 00:05:05.500 ' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.500 --rc genhtml_branch_coverage=1 00:05:05.500 --rc genhtml_function_coverage=1 00:05:05.500 --rc genhtml_legend=1 00:05:05.500 --rc geninfo_all_blocks=1 00:05:05.500 --rc geninfo_unexecuted_blocks=1 00:05:05.500 00:05:05.500 ' 00:05:05.500 06:28:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:05.500 06:28:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.500 06:28:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:05.500 06:28:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.500 06:28:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.500 ************************************ 00:05:05.500 START TEST event_perf 00:05:05.500 ************************************ 00:05:05.500 06:28:57 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.500 Running I/O for 1 seconds...[2024-11-19 06:28:57.331607] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:05.500 [2024-11-19 06:28:57.332165] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:05:05.759 [2024-11-19 06:28:57.507255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.759 [2024-11-19 06:28:57.603506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.759 [2024-11-19 06:28:57.603706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.759 Running I/O for 1 seconds...[2024-11-19 06:28:57.604002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.759 [2024-11-19 06:28:57.604029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.133 00:05:07.133 lcore 0: 151359 00:05:07.133 lcore 1: 151361 00:05:07.133 lcore 2: 151361 00:05:07.133 lcore 3: 151363 00:05:07.133 done. 
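The lcore counters printed above are the point of event_perf: it drives events on each core in the 0xF mask and reports how many each reactor processed during the one-second run requested with -t 1. The same binary can be run by hand straight from the build tree, using the exact invocation traced in this log:

    # -m is the reactor core mask, -t the runtime in seconds.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # Each "lcore N: <count>" line is the number of events that reactor processed in that window.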
00:05:07.133 ************************************ 00:05:07.133 END TEST event_perf 00:05:07.133 ************************************ 00:05:07.133 00:05:07.133 real 0m1.452s 00:05:07.133 user 0m4.231s 00:05:07.133 sys 0m0.100s 00:05:07.133 06:28:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.133 06:28:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.133 06:28:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.133 06:28:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.133 06:28:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.133 06:28:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.133 ************************************ 00:05:07.133 START TEST event_reactor 00:05:07.133 ************************************ 00:05:07.133 06:28:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.133 [2024-11-19 06:28:58.821372] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:07.133 [2024-11-19 06:28:58.821599] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58288 ] 00:05:07.133 [2024-11-19 06:28:58.974145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.133 [2024-11-19 06:28:59.063574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.514 test_start 00:05:08.514 oneshot 00:05:08.514 tick 100 00:05:08.514 tick 100 00:05:08.514 tick 250 00:05:08.514 tick 100 00:05:08.514 tick 100 00:05:08.514 tick 100 00:05:08.514 tick 250 00:05:08.514 tick 500 00:05:08.514 tick 100 00:05:08.514 tick 100 00:05:08.514 tick 250 00:05:08.514 tick 100 00:05:08.514 tick 100 00:05:08.514 test_end 00:05:08.514 ************************************ 00:05:08.514 END TEST event_reactor 00:05:08.514 ************************************ 00:05:08.514 00:05:08.514 real 0m1.406s 00:05:08.514 user 0m1.229s 00:05:08.514 sys 0m0.069s 00:05:08.514 06:29:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.514 06:29:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.514 06:29:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.514 06:29:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.514 06:29:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.514 06:29:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.514 ************************************ 00:05:08.514 START TEST event_reactor_perf 00:05:08.514 ************************************ 00:05:08.514 06:29:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.514 [2024-11-19 06:29:00.270975] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:08.514 [2024-11-19 06:29:00.271060] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:05:08.514 [2024-11-19 06:29:00.421942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.773 [2024-11-19 06:29:00.512649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.708 test_start 00:05:09.708 test_end 00:05:09.708 Performance: 415482 events per second 00:05:09.708 ************************************ 00:05:09.708 END TEST event_reactor_perf 00:05:09.708 ************************************ 00:05:09.708 00:05:09.708 real 0m1.398s 00:05:09.708 user 0m1.222s 00:05:09.708 sys 0m0.069s 00:05:09.708 06:29:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.708 06:29:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.965 06:29:01 event -- event/event.sh@49 -- # uname -s 00:05:09.965 06:29:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.965 06:29:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.965 06:29:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.965 06:29:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.965 06:29:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.965 ************************************ 00:05:09.965 START TEST event_scheduler 00:05:09.965 ************************************ 00:05:09.965 06:29:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.966 * Looking for test storage... 
00:05:09.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.966 06:29:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 06:29:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.966 06:29:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58395 00:05:09.966 06:29:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.966 06:29:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58395 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:09.966 06:29:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.966 06:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.224 [2024-11-19 06:29:01.902067] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:10.224 [2024-11-19 06:29:01.902341] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58395 ] 00:05:10.224 [2024-11-19 06:29:02.060801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.483 [2024-11-19 06:29:02.178280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.483 [2024-11-19 06:29:02.178625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.483 [2024-11-19 06:29:02.179004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.483 [2024-11-19 06:29:02.179028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:11.051 06:29:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.051 POWER: Cannot set governor of lcore 0 to performance 00:05:11.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.051 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:11.051 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:11.051 POWER: Unable to set Power Management Environment for lcore 0 00:05:11.051 [2024-11-19 06:29:02.752604] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:11.051 [2024-11-19 06:29:02.752628] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:11.051 [2024-11-19 06:29:02.752637] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.051 [2024-11-19 06:29:02.752654] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.051 [2024-11-19 06:29:02.752662] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.051 [2024-11-19 06:29:02.752671] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.051 06:29:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.051 06:29:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 [2024-11-19 06:29:02.994623] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:11.310 06:29:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.310 06:29:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.310 06:29:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.310 06:29:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 ************************************ 00:05:11.310 START TEST scheduler_create_thread 00:05:11.310 ************************************ 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 2 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 3 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 4 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 5 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 6 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 7 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 8 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 9 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 10 00:05:11.310 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 06:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.250 06:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.250 00:05:12.250 real 0m1.175s 00:05:12.509 ************************************ 00:05:12.509 END TEST scheduler_create_thread 00:05:12.509 ************************************ 00:05:12.509 user 0m0.012s 00:05:12.509 sys 0m0.007s 00:05:12.509 06:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.509 06:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.509 06:29:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.509 06:29:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58395 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58395 ']' 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58395 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58395 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:12.509 killing process with pid 58395 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58395' 00:05:12.509 06:29:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58395 00:05:12.509 06:29:04 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58395 00:05:12.767 [2024-11-19 06:29:04.660126] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:13.395 00:05:13.395 real 0m3.569s 00:05:13.395 user 0m5.738s 00:05:13.395 sys 0m0.392s 00:05:13.395 06:29:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.395 06:29:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.395 ************************************ 00:05:13.395 END TEST event_scheduler 00:05:13.395 ************************************ 00:05:13.654 06:29:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.654 06:29:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.654 06:29:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.654 06:29:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.654 06:29:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 ************************************ 00:05:13.654 START TEST app_repeat 00:05:13.654 ************************************ 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58484 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.654 Process app_repeat pid: 58484 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58484' 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.654 spdk_app_start Round 0 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58484 /var/tmp/spdk-nbd.sock 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58484 ']' 00:05:13.654 06:29:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.654 06:29:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.654 [2024-11-19 06:29:05.374007] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
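The scheduler run above is driven entirely over RPC: the test app starts with --wait-for-rpc, the dynamic scheduler is selected (the POWER/governor warnings only mean no cpufreq control is available in this VM, so the dpdk governor is skipped), and threads with different cpumasks and active percentages are then created, retuned, and deleted through the scheduler_plugin RPCs. A condensed sketch of that flow, assuming the app from test/event/scheduler is already listening on the default /var/tmp/spdk.sock and that PYTHONPATH lets rpc.py import scheduler_plugin:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic      # may log governor warnings, as above, and still succeed
    $rpc framework_start_init
    # scheduler_plugin RPCs: -n thread name, -m cpumask, -a active percentage
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50     # thread id 11 -> 50% active
    $rpc --plugin scheduler_plugin scheduler_thread_delete 12            # thread id 12, as in the log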
00:05:13.654 [2024-11-19 06:29:05.374161] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:05:13.654 [2024-11-19 06:29:05.546445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.912 [2024-11-19 06:29:05.644637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.912 [2024-11-19 06:29:05.644698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.478 06:29:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.478 06:29:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.478 06:29:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.737 Malloc0 00:05:14.737 06:29:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.995 Malloc1 00:05:14.995 06:29:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.995 06:29:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.995 06:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.995 06:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.995 06:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.995 06:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.996 06:29:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.254 /dev/nbd0 00:05:15.255 06:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.255 06:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.255 06:29:07 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.255 1+0 records in 00:05:15.255 1+0 records out 00:05:15.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022601 s, 18.1 MB/s 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.255 06:29:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.255 06:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.255 06:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.255 06:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.513 /dev/nbd1 00:05:15.513 06:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.513 06:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.513 1+0 records in 00:05:15.513 1+0 records out 00:05:15.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222572 s, 18.4 MB/s 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.513 06:29:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.513 06:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.513 06:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.513 06:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.514 06:29:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.514 
06:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.514 06:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.514 { 00:05:15.514 "nbd_device": "/dev/nbd0", 00:05:15.514 "bdev_name": "Malloc0" 00:05:15.514 }, 00:05:15.514 { 00:05:15.514 "nbd_device": "/dev/nbd1", 00:05:15.514 "bdev_name": "Malloc1" 00:05:15.514 } 00:05:15.514 ]' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.772 { 00:05:15.772 "nbd_device": "/dev/nbd0", 00:05:15.772 "bdev_name": "Malloc0" 00:05:15.772 }, 00:05:15.772 { 00:05:15.772 "nbd_device": "/dev/nbd1", 00:05:15.772 "bdev_name": "Malloc1" 00:05:15.772 } 00:05:15.772 ]' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.772 /dev/nbd1' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.772 /dev/nbd1' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.772 256+0 records in 00:05:15.772 256+0 records out 00:05:15.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629364 s, 167 MB/s 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.772 256+0 records in 00:05:15.772 256+0 records out 00:05:15.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208861 s, 50.2 MB/s 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.772 256+0 records in 00:05:15.772 256+0 records out 00:05:15.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191218 s, 54.8 MB/s 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.772 06:29:07 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.772 06:29:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.030 06:29:07 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.030 06:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.288 06:29:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.288 06:29:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.546 06:29:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.479 [2024-11-19 06:29:09.045379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.480 [2024-11-19 06:29:09.123890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.480 [2024-11-19 06:29:09.123885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.480 [2024-11-19 06:29:09.232610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.480 [2024-11-19 06:29:09.232662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.010 spdk_app_start Round 1 00:05:20.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.010 06:29:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.010 06:29:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.010 06:29:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58484 /var/tmp/spdk-nbd.sock 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58484 ']' 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
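Each app_repeat round above exercises the same malloc-over-NBD round trip. Condensed from the traced rpc.py/dd/cmp commands, one round's data path looks roughly like this; the paths and command ordering are taken from the trace, the loop structure is a simplification and not the literal nbd_common.sh source.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
pattern=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

$rpc bdev_malloc_create 64 4096               # 64 MiB bdev, 4096-byte blocks -> Malloc0
$rpc bdev_malloc_create 64 4096               # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0         # export each bdev as a kernel NBD device
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$pattern bs=4096 count=256            # 1 MiB random pattern file
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$pattern of=$nbd bs=4096 count=256 oflag=direct   # write the pattern through the NBD device
    cmp -b -n 1M $pattern $nbd                              # read back and verify byte-for-byte
done
rm $pattern

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
[ "$($rpc nbd_get_disks | grep -c /dev/nbd)" -eq 0 ]        # no NBD exports may remain
$rpc spdk_kill_instance SIGTERM                             # end this round of the app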
00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.010 06:29:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.010 06:29:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.010 Malloc0 00:05:20.010 06:29:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.268 Malloc1 00:05:20.268 06:29:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.268 06:29:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.526 /dev/nbd0 00:05:20.526 06:29:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.526 06:29:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.526 06:29:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.527 1+0 records in 00:05:20.527 1+0 records out 
00:05:20.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287318 s, 14.3 MB/s 00:05:20.527 06:29:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.527 06:29:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.527 06:29:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.527 06:29:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.527 06:29:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.527 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.527 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.527 06:29:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.876 /dev/nbd1 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.876 1+0 records in 00:05:20.876 1+0 records out 00:05:20.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181702 s, 22.5 MB/s 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.876 06:29:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.876 06:29:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.139 { 00:05:21.139 "nbd_device": "/dev/nbd0", 00:05:21.139 "bdev_name": "Malloc0" 00:05:21.139 }, 00:05:21.139 { 00:05:21.139 "nbd_device": "/dev/nbd1", 00:05:21.139 "bdev_name": "Malloc1" 00:05:21.139 } 
00:05:21.139 ]' 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.139 { 00:05:21.139 "nbd_device": "/dev/nbd0", 00:05:21.139 "bdev_name": "Malloc0" 00:05:21.139 }, 00:05:21.139 { 00:05:21.139 "nbd_device": "/dev/nbd1", 00:05:21.139 "bdev_name": "Malloc1" 00:05:21.139 } 00:05:21.139 ]' 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.139 /dev/nbd1' 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.139 /dev/nbd1' 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.139 06:29:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.140 256+0 records in 00:05:21.140 256+0 records out 00:05:21.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00958948 s, 109 MB/s 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.140 256+0 records in 00:05:21.140 256+0 records out 00:05:21.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156998 s, 66.8 MB/s 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.140 256+0 records in 00:05:21.140 256+0 records out 00:05:21.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202424 s, 51.8 MB/s 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.140 06:29:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.140 06:29:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.398 06:29:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.656 06:29:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.656 06:29:13 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.914 06:29:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.915 06:29:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.915 06:29:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.915 06:29:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.915 06:29:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.173 06:29:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.738 [2024-11-19 06:29:14.467341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.738 [2024-11-19 06:29:14.549106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.738 [2024-11-19 06:29:14.549212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.738 [2024-11-19 06:29:14.656997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.738 [2024-11-19 06:29:14.657054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.264 06:29:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.264 spdk_app_start Round 2 00:05:25.264 06:29:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.264 06:29:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58484 /var/tmp/spdk-nbd.sock 00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58484 ']' 00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
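The repeated autotest_common.sh@872-893 fragments above come from the waitfornbd helper, which the test calls before trusting a freshly exported /dev/nbdX. A simplified reading of that trace follows; the retry back-off and failure path are assumptions, since every probe in this run succeeded on the first attempt.

waitfornbd() {
    local nbd_name=$1 i size
    local probe=/home/vagrant/spdk_repo/spdk/test/event/nbdtest   # scratch file seen in the trace
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break          # wait for the kernel to publish the device
        sleep 0.1                                                 # assumed back-off
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=$probe bs=4096 count=1 iflag=direct   # prove it actually serves reads
        size=$(stat -c %s "$probe")
        rm -f "$probe"
        [ "$size" != 0 ] && return 0
        sleep 0.1                                                 # assumed
    done
    return 1
}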
00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.264 06:29:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.264 06:29:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.264 06:29:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.264 06:29:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.521 Malloc0 00:05:25.521 06:29:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.780 Malloc1 00:05:25.780 06:29:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.780 06:29:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.039 /dev/nbd0 00:05:26.039 06:29:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.039 06:29:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.039 1+0 records in 00:05:26.039 1+0 records out 
00:05:26.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334755 s, 12.2 MB/s 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.039 06:29:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.039 06:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.039 06:29:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.039 06:29:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.297 /dev/nbd1 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.297 1+0 records in 00:05:26.297 1+0 records out 00:05:26.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347675 s, 11.8 MB/s 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.297 06:29:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.297 06:29:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.555 { 00:05:26.555 "nbd_device": "/dev/nbd0", 00:05:26.555 "bdev_name": "Malloc0" 00:05:26.555 }, 00:05:26.555 { 00:05:26.555 "nbd_device": "/dev/nbd1", 00:05:26.555 "bdev_name": "Malloc1" 00:05:26.555 } 
00:05:26.555 ]' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.555 { 00:05:26.555 "nbd_device": "/dev/nbd0", 00:05:26.555 "bdev_name": "Malloc0" 00:05:26.555 }, 00:05:26.555 { 00:05:26.555 "nbd_device": "/dev/nbd1", 00:05:26.555 "bdev_name": "Malloc1" 00:05:26.555 } 00:05:26.555 ]' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.555 /dev/nbd1' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.555 /dev/nbd1' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.555 256+0 records in 00:05:26.555 256+0 records out 00:05:26.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671547 s, 156 MB/s 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.555 256+0 records in 00:05:26.555 256+0 records out 00:05:26.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174825 s, 60.0 MB/s 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.555 256+0 records in 00:05:26.555 256+0 records out 00:05:26.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198356 s, 52.9 MB/s 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.555 06:29:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.555 06:29:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.813 06:29:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.070 06:29:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.328 06:29:19 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.328 06:29:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.328 06:29:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.586 06:29:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.519 [2024-11-19 06:29:20.215472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.519 [2024-11-19 06:29:20.329311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.519 [2024-11-19 06:29:20.329399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.854 [2024-11-19 06:29:20.470157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.854 [2024-11-19 06:29:20.470238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.788 06:29:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58484 /var/tmp/spdk-nbd.sock 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58484 ']' 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
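Rounds 0-2 above all follow the same driver loop from event.sh (tags @23-@40 in the trace). Stitched together from those xtrace lines, app_repeat_test appears to be roughly the following; $testdir/$rootdir and the backgrounding of the app binary are assumptions.

app_repeat_test() {
    local rpc_server=/var/tmp/spdk-nbd.sock
    local nbd_list=(/dev/nbd0 /dev/nbd1)
    local bdev_list=(Malloc0 Malloc1)
    local repeat_times=4
    local repeat_pid i

    modprobe nbd
    $testdir/app_repeat/app_repeat -r $rpc_server -m 0x3 -t $repeat_times &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid $rpc_server
        $rootdir/scripts/rpc.py -s $rpc_server bdev_malloc_create 64 4096   # Malloc0
        $rootdir/scripts/rpc.py -s $rpc_server bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify $rpc_server "${bdev_list[*]}" "${nbd_list[*]}"
        # SIGTERM ends the current app iteration; app_repeat then starts the next round
        $rootdir/scripts/rpc.py -s $rpc_server spdk_kill_instance SIGTERM
        sleep 3
    done

    waitforlisten $repeat_pid $rpc_server
    killprocess $repeat_pid
    trap - SIGINT SIGTERM EXIT
}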
00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.788 06:29:22 event.app_repeat -- event/event.sh@39 -- # killprocess 58484 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58484 ']' 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58484 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58484 00:05:30.788 killing process with pid 58484 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58484' 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58484 00:05:30.788 06:29:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58484 00:05:31.356 spdk_app_start is called in Round 0. 00:05:31.356 Shutdown signal received, stop current app iteration 00:05:31.356 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:31.356 spdk_app_start is called in Round 1. 00:05:31.356 Shutdown signal received, stop current app iteration 00:05:31.356 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:31.356 spdk_app_start is called in Round 2. 00:05:31.356 Shutdown signal received, stop current app iteration 00:05:31.356 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:31.356 spdk_app_start is called in Round 3. 00:05:31.356 Shutdown signal received, stop current app iteration 00:05:31.356 06:29:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.356 06:29:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.356 ************************************ 00:05:31.356 END TEST app_repeat 00:05:31.356 ************************************ 00:05:31.356 00:05:31.356 real 0m17.925s 00:05:31.356 user 0m39.207s 00:05:31.356 sys 0m2.131s 00:05:31.356 06:29:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.356 06:29:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.356 06:29:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.356 06:29:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.356 06:29:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.356 06:29:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.356 06:29:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.615 ************************************ 00:05:31.615 START TEST cpu_locks 00:05:31.615 ************************************ 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.615 * Looking for test storage... 
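The START TEST/END TEST banners and the real/user/sys lines that bracket scheduler_create_thread, event_scheduler, app_repeat and now cpu_locks all come from the run_test wrapper (run_test app_repeat app_repeat_test, run_test cpu_locks .../cpu_locks.sh). Its observable behaviour amounts to the sketch below; the actual helper in autotest_common.sh also does argument checks and xtrace toggling that are omitted here.

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # produces the real/user/sys lines seen after each test
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}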
00:05:31.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.615 06:29:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.615 --rc genhtml_branch_coverage=1 00:05:31.615 --rc genhtml_function_coverage=1 00:05:31.615 --rc genhtml_legend=1 00:05:31.615 --rc geninfo_all_blocks=1 00:05:31.615 --rc geninfo_unexecuted_blocks=1 00:05:31.615 00:05:31.615 ' 00:05:31.615 06:29:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.615 --rc genhtml_branch_coverage=1 00:05:31.615 --rc genhtml_function_coverage=1 
00:05:31.616 --rc genhtml_legend=1 00:05:31.616 --rc geninfo_all_blocks=1 00:05:31.616 --rc geninfo_unexecuted_blocks=1 00:05:31.616 00:05:31.616 ' 00:05:31.616 06:29:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.616 --rc genhtml_branch_coverage=1 00:05:31.616 --rc genhtml_function_coverage=1 00:05:31.616 --rc genhtml_legend=1 00:05:31.616 --rc geninfo_all_blocks=1 00:05:31.616 --rc geninfo_unexecuted_blocks=1 00:05:31.616 00:05:31.616 ' 00:05:31.616 06:29:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.616 --rc genhtml_branch_coverage=1 00:05:31.616 --rc genhtml_function_coverage=1 00:05:31.616 --rc genhtml_legend=1 00:05:31.616 --rc geninfo_all_blocks=1 00:05:31.616 --rc geninfo_unexecuted_blocks=1 00:05:31.616 00:05:31.616 ' 00:05:31.616 06:29:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.616 06:29:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.616 06:29:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.616 06:29:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.616 06:29:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.616 06:29:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.616 06:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 START TEST default_locks 00:05:31.616 ************************************ 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58916 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58916 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58916 ']' 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.616 06:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 [2024-11-19 06:29:23.492117] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:31.616 [2024-11-19 06:29:23.492234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58916 ] 00:05:31.874 [2024-11-19 06:29:23.647996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.874 [2024-11-19 06:29:23.767390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58916 ']' 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.808 killing process with pid 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58916' 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58916 00:05:32.808 06:29:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58916 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58916 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58916 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58916 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58916 ']' 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.183 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58916) - No such process 00:05:34.183 ERROR: process (pid: 58916) is no longer running 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.183 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.184 00:05:34.184 real 0m2.638s 00:05:34.184 user 0m2.567s 00:05:34.184 sys 0m0.499s 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.184 06:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.184 ************************************ 00:05:34.184 END TEST default_locks 00:05:34.184 ************************************ 00:05:34.184 06:29:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.184 06:29:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.184 06:29:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.184 06:29:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.184 ************************************ 00:05:34.184 START TEST default_locks_via_rpc 00:05:34.184 ************************************ 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58980 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58980 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.184 06:29:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.443 [2024-11-19 06:29:26.179570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:34.443 [2024-11-19 06:29:26.179678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:05:34.443 [2024-11-19 06:29:26.340023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.703 [2024-11-19 06:29:26.436521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58980 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58980 00:05:35.269 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58980 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58980 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.527 
06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.527 killing process with pid 58980 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58980 00:05:35.527 06:29:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58980 00:05:36.903 00:05:36.903 real 0m2.373s 00:05:36.903 user 0m2.352s 00:05:36.903 sys 0m0.468s 00:05:36.903 06:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.903 06:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.903 ************************************ 00:05:36.903 END TEST default_locks_via_rpc 00:05:36.903 ************************************ 00:05:36.903 06:29:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.903 06:29:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.903 06:29:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.903 06:29:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.903 ************************************ 00:05:36.903 START TEST non_locking_app_on_locked_coremask 00:05:36.903 ************************************ 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59032 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.903 06:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.903 [2024-11-19 06:29:28.604778] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:36.903 [2024-11-19 06:29:28.604892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:36.903 [2024-11-19 06:29:28.760089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.183 [2024-11-19 06:29:28.862241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59048 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.770 06:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.770 [2024-11-19 06:29:29.525262] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:37.770 [2024-11-19 06:29:29.525388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:05:37.770 [2024-11-19 06:29:29.689891] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.770 [2024-11-19 06:29:29.689962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.028 [2024-11-19 06:29:29.890886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.403 06:29:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.403 06:29:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.403 06:29:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59032 00:05:39.403 06:29:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59032 00:05:39.403 06:29:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59032 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59032 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59032 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.403 killing process with pid 59032 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59032' 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59032 00:05:39.403 06:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59032 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59048 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59048 ']' 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59048 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59048 00:05:41.951 killing process with pid 59048 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59048' 00:05:41.951 06:29:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59048 00:05:41.951 06:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59048 00:05:43.331 00:05:43.331 real 0m6.480s 00:05:43.331 user 0m6.720s 00:05:43.331 sys 0m0.930s 00:05:43.331 06:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.331 ************************************ 00:05:43.331 END TEST non_locking_app_on_locked_coremask 00:05:43.331 ************************************ 00:05:43.331 06:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.331 06:29:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.331 06:29:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.331 06:29:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.331 06:29:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.331 ************************************ 00:05:43.331 START TEST locking_app_on_unlocked_coremask 00:05:43.331 ************************************ 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59145 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59145 /var/tmp/spdk.sock 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.331 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.331 [2024-11-19 06:29:35.120363] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:43.331 [2024-11-19 06:29:35.120479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:05:43.589 [2024-11-19 06:29:35.274895] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.589 [2024-11-19 06:29:35.274946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.589 [2024-11-19 06:29:35.369798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59155 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59155 /var/tmp/spdk2.sock 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.162 06:29:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.162 [2024-11-19 06:29:36.060675] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:44.162 [2024-11-19 06:29:36.061089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:05:44.424 [2024-11-19 06:29:36.243368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.685 [2024-11-19 06:29:36.451662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.627 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.627 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.627 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59155 00:05:45.627 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59155 00:05:45.627 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59145 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59145 ']' 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59145 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59145 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.887 killing process with pid 59145 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59145' 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59145 00:05:45.887 06:29:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59145 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59155 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59155 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.465 killing process with pid 59155 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155' 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59155 00:05:48.465 06:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59155 00:05:49.841 00:05:49.841 real 0m6.554s 00:05:49.841 user 0m6.773s 00:05:49.841 sys 0m0.923s 00:05:49.841 06:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.841 06:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.841 ************************************ 00:05:49.841 END TEST locking_app_on_unlocked_coremask 00:05:49.841 ************************************ 00:05:49.841 06:29:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.841 06:29:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.842 06:29:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.842 06:29:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.842 ************************************ 00:05:49.842 START TEST locking_app_on_locked_coremask 00:05:49.842 ************************************ 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59257 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59257 /var/tmp/spdk.sock 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59257 ']' 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.842 06:29:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.842 [2024-11-19 06:29:41.710884] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:49.842 [2024-11-19 06:29:41.711017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:05:50.100 [2024-11-19 06:29:41.871074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.100 [2024-11-19 06:29:41.987011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.035 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.035 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.035 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59273 00:05:51.035 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59273 /var/tmp/spdk2.sock 00:05:51.035 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59273 /var/tmp/spdk2.sock 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59273 /var/tmp/spdk2.sock 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.036 06:29:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.036 [2024-11-19 06:29:42.709890] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:51.036 [2024-11-19 06:29:42.710026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:05:51.036 [2024-11-19 06:29:42.884324] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59257 has claimed it. 00:05:51.036 [2024-11-19 06:29:42.884389] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.603 ERROR: process (pid: 59273) is no longer running 00:05:51.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59273) - No such process 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59257 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59257 00:05:51.603 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59257 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59257 ']' 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59257 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59257 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.862 killing process with pid 59257 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59257' 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59257 00:05:51.862 06:29:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59257 00:05:53.241 00:05:53.241 real 0m3.232s 00:05:53.241 user 0m3.466s 00:05:53.241 sys 0m0.620s 00:05:53.241 06:29:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.241 06:29:44 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:53.241 ************************************ 00:05:53.241 END TEST locking_app_on_locked_coremask 00:05:53.241 ************************************ 00:05:53.241 06:29:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:53.241 06:29:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.241 06:29:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.242 06:29:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.242 ************************************ 00:05:53.242 START TEST locking_overlapped_coremask 00:05:53.242 ************************************ 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59326 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59326 /var/tmp/spdk.sock 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59326 ']' 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:53.242 06:29:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.242 [2024-11-19 06:29:45.008342] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:53.242 [2024-11-19 06:29:45.008456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:05:53.242 [2024-11-19 06:29:45.159301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.500 [2024-11-19 06:29:45.251439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.500 [2024-11-19 06:29:45.251562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.500 [2024-11-19 06:29:45.251571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59344 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59344 /var/tmp/spdk2.sock 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59344 /var/tmp/spdk2.sock 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:54.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59344 /var/tmp/spdk2.sock 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59344 ']' 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.066 06:29:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 [2024-11-19 06:29:45.857824] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:54.066 [2024-11-19 06:29:45.858275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ] 00:05:54.324 [2024-11-19 06:29:46.022838] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59326 has claimed it. 00:05:54.324 [2024-11-19 06:29:46.022895] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.586 ERROR: process (pid: 59344) is no longer running 00:05:54.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59344) - No such process 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59326 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59326 ']' 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59326 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.586 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59326 00:05:54.845 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.845 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.845 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59326' 00:05:54.845 killing process with pid 59326 00:05:54.845 06:29:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59326 00:05:54.845 06:29:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59326 00:05:56.220 00:05:56.220 real 0m2.841s 00:05:56.220 user 0m7.618s 00:05:56.220 sys 0m0.461s 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.220 ************************************ 00:05:56.220 END TEST locking_overlapped_coremask 00:05:56.220 ************************************ 00:05:56.220 06:29:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.220 06:29:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.220 06:29:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.220 06:29:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.220 ************************************ 00:05:56.220 START TEST locking_overlapped_coremask_via_rpc 00:05:56.220 ************************************ 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59397 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59397 /var/tmp/spdk.sock 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59397 ']' 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.220 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 06:29:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.221 [2024-11-19 06:29:47.887354] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:56.221 [2024-11-19 06:29:47.887668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59397 ] 00:05:56.221 [2024-11-19 06:29:48.035042] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.221 [2024-11-19 06:29:48.035087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.221 [2024-11-19 06:29:48.127581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.221 [2024-11-19 06:29:48.127778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.221 [2024-11-19 06:29:48.127780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.787 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.787 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.787 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.787 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59415 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59415 /var/tmp/spdk2.sock 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59415 ']' 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.788 06:29:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.788 [2024-11-19 06:29:48.707012] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:56.788 [2024-11-19 06:29:48.707529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:05:57.045 [2024-11-19 06:29:48.865878] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.045 [2024-11-19 06:29:48.865929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.304 [2024-11-19 06:29:49.071227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.304 [2024-11-19 06:29:49.071283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.304 [2024-11-19 06:29:49.071321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 [2024-11-19 06:29:50.128063] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59397 has claimed it. 00:05:58.237 request: 00:05:58.237 { 00:05:58.237 "method": "framework_enable_cpumask_locks", 00:05:58.237 "req_id": 1 00:05:58.237 } 00:05:58.237 Got JSON-RPC error response 00:05:58.237 response: 00:05:58.237 { 00:05:58.237 "code": -32603, 00:05:58.238 "message": "Failed to claim CPU core: 2" 00:05:58.238 } 00:05:58.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59397 /var/tmp/spdk.sock 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59397 ']' 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.238 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.496 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.496 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.496 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59415 /var/tmp/spdk2.sock 00:05:58.496 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59415 ']' 00:05:58.496 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.497 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.497 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
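[Editor's note] The trace above exercises the cpumask lock claim conflict: two spdk_tgt instances are started with --disable-cpumask-locks on coremasks that overlap on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), the first then claims its core locks via the framework_enable_cpumask_locks RPC, and the second's claim is expected to fail with JSON-RPC error -32603 because core 2 is already locked by pid 59397. A condensed sketch of that sequence, using only binaries, sockets, and method names that appear in the trace; the waits, retries, and xtrace plumbing the harness performs are omitted:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Both targets start with core lock files disabled, on coremasks that overlap on core 2.
"$SPDK/build/bin/spdk_tgt" -m 0x7  --disable-cpumask-locks &                          # cores 0-2
"$SPDK/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4

# First target claims /var/tmp/spdk_cpu_lock_000..002 and succeeds.
"$SPDK/scripts/rpc.py" framework_enable_cpumask_locks

# Second target overlaps on core 2, so this call is expected to fail with
# {"code": -32603, "message": "Failed to claim CPU core: 2"}.
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```

The check_remaining_locks step traced below then verifies that exactly /var/tmp/spdk_cpu_lock_000 through _002 exist, i.e. only the first target's cores ended up locked.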
00:05:58.497 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.497 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.755 ************************************ 00:05:58.755 END TEST locking_overlapped_coremask_via_rpc 00:05:58.755 ************************************ 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.755 00:05:58.755 real 0m2.707s 00:05:58.755 user 0m0.946s 00:05:58.755 sys 0m0.126s 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.755 06:29:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.755 06:29:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.755 06:29:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59397 ]] 00:05:58.755 06:29:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59397 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59397 ']' 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59397 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59397 00:05:58.755 killing process with pid 59397 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59397' 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59397 00:05:58.755 06:29:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59397 00:06:00.127 06:29:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59415 ]] 00:06:00.127 06:29:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59415 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59415 ']' 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59415 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.127 
06:29:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59415 00:06:00.127 killing process with pid 59415 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59415' 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59415 00:06:00.127 06:29:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59415 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59397 ]] 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59397 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59397 ']' 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59397 00:06:01.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59397) - No such process 00:06:01.501 Process with pid 59397 is not found 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59397 is not found' 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59415 ]] 00:06:01.501 Process with pid 59415 is not found 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59415 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59415 ']' 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59415 00:06:01.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59415) - No such process 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59415 is not found' 00:06:01.501 06:29:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.501 00:06:01.501 real 0m29.811s 00:06:01.501 user 0m50.114s 00:06:01.501 sys 0m4.894s 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.501 ************************************ 00:06:01.501 END TEST cpu_locks 00:06:01.501 ************************************ 00:06:01.501 06:29:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.501 ************************************ 00:06:01.501 END TEST event 00:06:01.501 ************************************ 00:06:01.501 00:06:01.501 real 0m55.983s 00:06:01.501 user 1m41.892s 00:06:01.501 sys 0m7.904s 00:06:01.501 06:29:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.501 06:29:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.501 06:29:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.501 06:29:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.501 06:29:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.501 06:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.501 ************************************ 00:06:01.501 START TEST thread 00:06:01.501 ************************************ 00:06:01.501 06:29:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.501 * Looking for test storage... 
00:06:01.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:01.501 06:29:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.501 06:29:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.501 06:29:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.501 06:29:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.501 06:29:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.501 06:29:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.501 06:29:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.501 06:29:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.501 06:29:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.501 06:29:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.501 06:29:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.501 06:29:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.501 06:29:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.501 06:29:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.501 06:29:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.501 06:29:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:01.501 06:29:53 thread -- scripts/common.sh@345 -- # : 1 00:06:01.501 06:29:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.501 06:29:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.501 06:29:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:01.501 06:29:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:01.501 06:29:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.501 06:29:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:01.501 06:29:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.501 06:29:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:01.501 06:29:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:01.502 06:29:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.502 06:29:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:01.502 06:29:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.502 06:29:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.502 06:29:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.502 06:29:53 thread -- scripts/common.sh@368 -- # return 0 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.502 --rc genhtml_branch_coverage=1 00:06:01.502 --rc genhtml_function_coverage=1 00:06:01.502 --rc genhtml_legend=1 00:06:01.502 --rc geninfo_all_blocks=1 00:06:01.502 --rc geninfo_unexecuted_blocks=1 00:06:01.502 00:06:01.502 ' 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.502 --rc genhtml_branch_coverage=1 00:06:01.502 --rc genhtml_function_coverage=1 00:06:01.502 --rc genhtml_legend=1 00:06:01.502 --rc geninfo_all_blocks=1 00:06:01.502 --rc geninfo_unexecuted_blocks=1 00:06:01.502 00:06:01.502 ' 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:01.502 --rc genhtml_branch_coverage=1 00:06:01.502 --rc genhtml_function_coverage=1 00:06:01.502 --rc genhtml_legend=1 00:06:01.502 --rc geninfo_all_blocks=1 00:06:01.502 --rc geninfo_unexecuted_blocks=1 00:06:01.502 00:06:01.502 ' 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.502 --rc genhtml_branch_coverage=1 00:06:01.502 --rc genhtml_function_coverage=1 00:06:01.502 --rc genhtml_legend=1 00:06:01.502 --rc geninfo_all_blocks=1 00:06:01.502 --rc geninfo_unexecuted_blocks=1 00:06:01.502 00:06:01.502 ' 00:06:01.502 06:29:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.502 06:29:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.502 ************************************ 00:06:01.502 START TEST thread_poller_perf 00:06:01.502 ************************************ 00:06:01.502 06:29:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.502 [2024-11-19 06:29:53.343499] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:01.502 [2024-11-19 06:29:53.343713] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59564 ] 00:06:01.760 [2024-11-19 06:29:53.498969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.760 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:01.760 [2024-11-19 06:29:53.606192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.134 [2024-11-19T06:29:55.063Z] ====================================== 00:06:03.134 [2024-11-19T06:29:55.063Z] busy:2610705654 (cyc) 00:06:03.134 [2024-11-19T06:29:55.063Z] total_run_count: 304000 00:06:03.134 [2024-11-19T06:29:55.063Z] tsc_hz: 2600000000 (cyc) 00:06:03.134 [2024-11-19T06:29:55.063Z] ====================================== 00:06:03.134 [2024-11-19T06:29:55.063Z] poller_cost: 8587 (cyc), 3302 (nsec) 00:06:03.134 ************************************ 00:06:03.134 END TEST thread_poller_perf 00:06:03.134 ************************************ 00:06:03.134 00:06:03.134 real 0m1.467s 00:06:03.134 user 0m1.284s 00:06:03.134 sys 0m0.074s 00:06:03.134 06:29:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.134 06:29:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.134 06:29:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.134 06:29:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.134 06:29:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.134 06:29:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.134 ************************************ 00:06:03.134 START TEST thread_poller_perf 00:06:03.134 ************************************ 00:06:03.134 06:29:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.134 [2024-11-19 06:29:54.842899] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:03.134 [2024-11-19 06:29:54.842995] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59606 ] 00:06:03.134 [2024-11-19 06:29:55.003466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.392 Running 1000 pollers for 1 seconds with 0 microseconds period. 
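[Editor's note] The poller_cost figures in the result block above follow directly from the raw counters on the same lines: busy cycles divided by the reported total_run_count gives cycles per poll, and dividing by the TSC frequency converts that to nanoseconds. A minimal recomputation for the first run (-b 1000 pollers, 1 microsecond period, 1 second, per the notice lines in the trace), using the numbers reported above:

```bash
# First poller_perf run, numbers copied from the result block above.
busy_cyc=2610705654    # total busy cycles
run_count=304000       # total_run_count
tsc_hz=2600000000      # TSC frequency (2.6 GHz)

cost_cyc=$(( busy_cyc / run_count ))               # 8587 cycles per poll
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # 3302 ns per poll
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

The second run below uses a 0 microsecond period, so far more iterations complete (total_run_count 3923000) and the same arithmetic yields the reported 663 cycles / 255 ns per poll.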
00:06:03.392 [2024-11-19 06:29:55.113116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.766 [2024-11-19T06:29:56.695Z] ====================================== 00:06:04.766 [2024-11-19T06:29:56.695Z] busy:2603271386 (cyc) 00:06:04.766 [2024-11-19T06:29:56.695Z] total_run_count: 3923000 00:06:04.766 [2024-11-19T06:29:56.695Z] tsc_hz: 2600000000 (cyc) 00:06:04.766 [2024-11-19T06:29:56.695Z] ====================================== 00:06:04.766 [2024-11-19T06:29:56.695Z] poller_cost: 663 (cyc), 255 (nsec) 00:06:04.766 00:06:04.766 real 0m1.460s 00:06:04.766 user 0m1.280s 00:06:04.766 sys 0m0.072s 00:06:04.766 06:29:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.766 06:29:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.766 ************************************ 00:06:04.766 END TEST thread_poller_perf 00:06:04.766 ************************************ 00:06:04.766 06:29:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.766 ************************************ 00:06:04.766 END TEST thread 00:06:04.766 ************************************ 00:06:04.766 00:06:04.766 real 0m3.152s 00:06:04.766 user 0m2.667s 00:06:04.766 sys 0m0.259s 00:06:04.766 06:29:56 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.766 06:29:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.766 06:29:56 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:04.766 06:29:56 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.766 06:29:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.766 06:29:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.766 06:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.766 ************************************ 00:06:04.766 START TEST app_cmdline 00:06:04.766 ************************************ 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.766 * Looking for test storage... 
00:06:04.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.766 06:29:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.766 --rc genhtml_branch_coverage=1 00:06:04.766 --rc genhtml_function_coverage=1 00:06:04.766 --rc genhtml_legend=1 00:06:04.766 --rc geninfo_all_blocks=1 00:06:04.766 --rc geninfo_unexecuted_blocks=1 00:06:04.766 00:06:04.766 ' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.766 --rc genhtml_branch_coverage=1 00:06:04.766 --rc genhtml_function_coverage=1 00:06:04.766 --rc genhtml_legend=1 00:06:04.766 --rc geninfo_all_blocks=1 00:06:04.766 --rc geninfo_unexecuted_blocks=1 00:06:04.766 
00:06:04.766 ' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.766 --rc genhtml_branch_coverage=1 00:06:04.766 --rc genhtml_function_coverage=1 00:06:04.766 --rc genhtml_legend=1 00:06:04.766 --rc geninfo_all_blocks=1 00:06:04.766 --rc geninfo_unexecuted_blocks=1 00:06:04.766 00:06:04.766 ' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.766 --rc genhtml_branch_coverage=1 00:06:04.766 --rc genhtml_function_coverage=1 00:06:04.766 --rc genhtml_legend=1 00:06:04.766 --rc geninfo_all_blocks=1 00:06:04.766 --rc geninfo_unexecuted_blocks=1 00:06:04.766 00:06:04.766 ' 00:06:04.766 06:29:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.766 06:29:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59690 00:06:04.766 06:29:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59690 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59690 ']' 00:06:04.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.766 06:29:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.766 06:29:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.766 [2024-11-19 06:29:56.608499] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
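[Editor's note] The cmdline test above starts spdk_tgt with an RPC whitelist (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two methods are callable on /var/tmp/spdk.sock; any other method is rejected with JSON-RPC error -32601, as the env_dpdk_get_mem_stats call traced below demonstrates. A condensed sketch of the same checks, reusing the binary, script, and method names that appear in the trace:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Target exposes only the two whitelisted RPC methods.
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &

"$SPDK/scripts/rpc.py" spdk_get_version         # allowed: returns the version JSON shown below
"$SPDK/scripts/rpc.py" rpc_get_methods          # allowed: lists exactly the two whitelisted methods
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # not whitelisted: fails with -32601 "Method not found"
```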
00:06:04.767 [2024-11-19 06:29:56.608636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59690 ] 00:06:05.025 [2024-11-19 06:29:56.771295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.025 [2024-11-19 06:29:56.889777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:05.966 { 00:06:05.966 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:05.966 "fields": { 00:06:05.966 "major": 25, 00:06:05.966 "minor": 1, 00:06:05.966 "patch": 0, 00:06:05.966 "suffix": "-pre", 00:06:05.966 "commit": "d47eb51c9" 00:06:05.966 } 00:06:05.966 } 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.966 06:29:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:05.966 06:29:57 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.225 request: 00:06:06.225 { 00:06:06.225 "method": "env_dpdk_get_mem_stats", 00:06:06.225 "req_id": 1 00:06:06.225 } 00:06:06.225 Got JSON-RPC error response 00:06:06.225 response: 00:06:06.225 { 00:06:06.225 "code": -32601, 00:06:06.225 "message": "Method not found" 00:06:06.225 } 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.225 06:29:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59690 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59690 ']' 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59690 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59690 00:06:06.225 killing process with pid 59690 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59690' 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 59690 00:06:06.225 06:29:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 59690 00:06:07.599 00:06:07.599 real 0m3.080s 00:06:07.599 user 0m3.315s 00:06:07.599 sys 0m0.519s 00:06:07.599 ************************************ 00:06:07.599 END TEST app_cmdline 00:06:07.599 ************************************ 00:06:07.599 06:29:59 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.599 06:29:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.599 06:29:59 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.599 06:29:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.599 06:29:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.599 06:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:07.599 ************************************ 00:06:07.599 START TEST version 00:06:07.599 ************************************ 00:06:07.599 06:29:59 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.857 * Looking for test storage... 
00:06:07.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.857 06:29:59 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.857 06:29:59 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.857 06:29:59 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.857 06:29:59 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.857 06:29:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.857 06:29:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.857 06:29:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.857 06:29:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.857 06:29:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.857 06:29:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.858 06:29:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.858 06:29:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.858 06:29:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.858 06:29:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.858 06:29:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.858 06:29:59 version -- scripts/common.sh@344 -- # case "$op" in 00:06:07.858 06:29:59 version -- scripts/common.sh@345 -- # : 1 00:06:07.858 06:29:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.858 06:29:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.858 06:29:59 version -- scripts/common.sh@365 -- # decimal 1 00:06:07.858 06:29:59 version -- scripts/common.sh@353 -- # local d=1 00:06:07.858 06:29:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.858 06:29:59 version -- scripts/common.sh@355 -- # echo 1 00:06:07.858 06:29:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.858 06:29:59 version -- scripts/common.sh@366 -- # decimal 2 00:06:07.858 06:29:59 version -- scripts/common.sh@353 -- # local d=2 00:06:07.858 06:29:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.858 06:29:59 version -- scripts/common.sh@355 -- # echo 2 00:06:07.858 06:29:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.858 06:29:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.858 06:29:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.858 06:29:59 version -- scripts/common.sh@368 -- # return 0 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.858 --rc genhtml_branch_coverage=1 00:06:07.858 --rc genhtml_function_coverage=1 00:06:07.858 --rc genhtml_legend=1 00:06:07.858 --rc geninfo_all_blocks=1 00:06:07.858 --rc geninfo_unexecuted_blocks=1 00:06:07.858 00:06:07.858 ' 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.858 --rc genhtml_branch_coverage=1 00:06:07.858 --rc genhtml_function_coverage=1 00:06:07.858 --rc genhtml_legend=1 00:06:07.858 --rc geninfo_all_blocks=1 00:06:07.858 --rc geninfo_unexecuted_blocks=1 00:06:07.858 00:06:07.858 ' 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.858 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.858 --rc genhtml_branch_coverage=1 00:06:07.858 --rc genhtml_function_coverage=1 00:06:07.858 --rc genhtml_legend=1 00:06:07.858 --rc geninfo_all_blocks=1 00:06:07.858 --rc geninfo_unexecuted_blocks=1 00:06:07.858 00:06:07.858 ' 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.858 --rc genhtml_branch_coverage=1 00:06:07.858 --rc genhtml_function_coverage=1 00:06:07.858 --rc genhtml_legend=1 00:06:07.858 --rc geninfo_all_blocks=1 00:06:07.858 --rc geninfo_unexecuted_blocks=1 00:06:07.858 00:06:07.858 ' 00:06:07.858 06:29:59 version -- app/version.sh@17 -- # get_header_version major 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # cut -f2 00:06:07.858 06:29:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.858 06:29:59 version -- app/version.sh@17 -- # major=25 00:06:07.858 06:29:59 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.858 06:29:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # cut -f2 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.858 06:29:59 version -- app/version.sh@18 -- # minor=1 00:06:07.858 06:29:59 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # cut -f2 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.858 06:29:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.858 06:29:59 version -- app/version.sh@19 -- # patch=0 00:06:07.858 06:29:59 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # cut -f2 00:06:07.858 06:29:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.858 06:29:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.858 06:29:59 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.858 06:29:59 version -- app/version.sh@22 -- # version=25.1 00:06:07.858 06:29:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.858 06:29:59 version -- app/version.sh@28 -- # version=25.1rc0 00:06:07.858 06:29:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:07.858 06:29:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:07.858 06:29:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:07.858 06:29:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:07.858 00:06:07.858 real 0m0.196s 00:06:07.858 user 0m0.128s 00:06:07.858 sys 0m0.095s 00:06:07.858 ************************************ 00:06:07.858 END TEST version 00:06:07.858 ************************************ 00:06:07.858 06:29:59 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.858 06:29:59 version -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 06:29:59 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:07.858 06:29:59 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:07.858 06:29:59 -- spdk/autotest.sh@194 -- # uname -s 00:06:07.858 06:29:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:07.858 06:29:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.858 06:29:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.858 06:29:59 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:07.858 06:29:59 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:07.858 06:29:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.858 06:29:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.858 06:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 ************************************ 00:06:07.858 START TEST blockdev_nvme 00:06:07.858 ************************************ 00:06:07.858 06:29:59 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.117 * Looking for test storage... 00:06:08.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.117 06:29:59 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.117 --rc genhtml_branch_coverage=1 00:06:08.117 --rc genhtml_function_coverage=1 00:06:08.117 --rc genhtml_legend=1 00:06:08.117 --rc geninfo_all_blocks=1 00:06:08.117 --rc geninfo_unexecuted_blocks=1 00:06:08.117 00:06:08.117 ' 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.117 --rc genhtml_branch_coverage=1 00:06:08.117 --rc genhtml_function_coverage=1 00:06:08.117 --rc genhtml_legend=1 00:06:08.117 --rc geninfo_all_blocks=1 00:06:08.117 --rc geninfo_unexecuted_blocks=1 00:06:08.117 00:06:08.117 ' 00:06:08.117 06:29:59 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.117 --rc genhtml_branch_coverage=1 00:06:08.117 --rc genhtml_function_coverage=1 00:06:08.117 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 00:06:08.118 ' 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.118 --rc genhtml_branch_coverage=1 00:06:08.118 --rc genhtml_function_coverage=1 00:06:08.118 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 00:06:08.118 ' 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:08.118 06:29:59 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:08.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59862 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59862 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59862 ']' 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.118 06:29:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.118 06:29:59 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:08.118 [2024-11-19 06:29:59.959217] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
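[Editor's note] Once the target is up, the setup_nvme_conf step traced below builds the bdev configuration by running gen_nvme.sh and loading its output into the target, then waits for the resulting Nvme*n* bdevs to be examined. A condensed sketch of that step; rpc_cmd is the harness's JSON-RPC wrapper from autotest_common.sh and is assumed to be available, exactly as in the trace:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# gen_nvme.sh emits a bdev subsystem config with one bdev_nvme_attach_controller entry per
# local NVMe controller (Nvme0..Nvme3 at 0000:00:10.0 through 0000:00:13.0 in this run).
json=$("$SPDK/scripts/gen_nvme.sh")

# Load the generated config into the running spdk_tgt, then block until the bdevs are registered.
rpc_cmd load_subsystem_config -j "$json"
rpc_cmd bdev_wait_for_examine
```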
00:06:08.118 [2024-11-19 06:29:59.959335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:06:08.376 [2024-11-19 06:30:00.114528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.376 [2024-11-19 06:30:00.234133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.309 06:30:00 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.309 06:30:00 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.309 06:30:00 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:09.309 06:30:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.309 06:30:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.309 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.309 06:30:01 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:09.309 06:30:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.309 06:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.570 06:30:01 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 06:30:01 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "43bd1d43-b080-41ed-8f8e-7d8fd004b2c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "43bd1d43-b080-41ed-8f8e-7d8fd004b2c9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "15123b1f-ec01-4d6f-b1b3-d7c0c11a95d5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "15123b1f-ec01-4d6f-b1b3-d7c0c11a95d5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ffe1140b-43fd-40c7-bde9-d6bd258ca271"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffe1140b-43fd-40c7-bde9-d6bd258ca271",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6c27974d-5f19-4c65-aae2-f50dfc9107ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c27974d-5f19-4c65-aae2-f50dfc9107ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1c9c668e-5a71-457f-89cd-b4ccf18468ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "1c9c668e-5a71-457f-89cd-b4ccf18468ea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d015c9f6-c649-41fa-9450-569301cf6c52"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d015c9f6-c649-41fa-9450-569301cf6c52",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:09.570 06:30:01 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:09.571 06:30:01 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59862 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59862 ']' 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59862 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:09.571 06:30:01 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59862 00:06:09.571 killing process with pid 59862 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59862' 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59862 00:06:09.571 06:30:01 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59862 00:06:11.479 06:30:02 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:11.479 06:30:02 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.479 06:30:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:11.479 06:30:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.479 06:30:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.479 ************************************ 00:06:11.479 START TEST bdev_hello_world 00:06:11.479 ************************************ 00:06:11.479 06:30:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.479 [2024-11-19 06:30:03.050161] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:11.479 [2024-11-19 06:30:03.050277] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:06:11.480 [2024-11-19 06:30:03.201778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.480 [2024-11-19 06:30:03.302398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.141 [2024-11-19 06:30:03.820096] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.141 [2024-11-19 06:30:03.820348] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.141 [2024-11-19 06:30:03.820378] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.141 [2024-11-19 06:30:03.823023] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.141 [2024-11-19 06:30:03.823469] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.141 [2024-11-19 06:30:03.823581] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.141 [2024-11-19 06:30:03.823730] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:12.141 00:06:12.141 [2024-11-19 06:30:03.823754] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:12.707 ************************************ 00:06:12.707 END TEST bdev_hello_world 00:06:12.707 ************************************ 00:06:12.707 00:06:12.707 real 0m1.593s 00:06:12.707 user 0m1.288s 00:06:12.707 sys 0m0.197s 00:06:12.707 06:30:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.707 06:30:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:12.707 06:30:04 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:12.707 06:30:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.707 06:30:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.707 06:30:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.707 ************************************ 00:06:12.707 START TEST bdev_bounds 00:06:12.707 ************************************ 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59988 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59988' 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:12.707 Process bdevio pid: 59988 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59988 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59988 ']' 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.707 06:30:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.965 [2024-11-19 06:30:04.691638] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:12.965 [2024-11-19 06:30:04.692247] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:06:12.965 [2024-11-19 06:30:04.852507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.223 [2024-11-19 06:30:04.976207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.223 [2024-11-19 06:30:04.976494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.224 [2024-11-19 06:30:04.976517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.793 06:30:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.793 06:30:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:13.793 06:30:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:13.793 I/O targets: 00:06:13.793 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:13.793 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:13.793 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.793 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.793 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.793 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:13.793 00:06:13.793 00:06:13.793 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.793 http://cunit.sourceforge.net/ 00:06:13.793 00:06:13.793 00:06:13.793 Suite: bdevio tests on: Nvme3n1 00:06:13.793 Test: blockdev write read block ...passed 00:06:13.793 Test: blockdev write zeroes read block ...passed 00:06:13.793 Test: blockdev write zeroes read no split ...passed 00:06:13.793 Test: blockdev write zeroes read split ...passed 00:06:13.793 Test: blockdev write zeroes read split partial ...passed 00:06:13.793 Test: blockdev reset ...[2024-11-19 06:30:05.700877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:13.793 passed 00:06:13.793 Test: blockdev write read 8 blocks ...[2024-11-19 06:30:05.704104] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:13.793 passed 00:06:13.793 Test: blockdev write read size > 128k ...passed 00:06:13.793 Test: blockdev write read invalid size ...passed 00:06:13.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.793 Test: blockdev write read max offset ...passed 00:06:13.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.793 Test: blockdev writev readv 8 blocks ...passed 00:06:13.793 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.793 Test: blockdev writev readv block ...passed 00:06:13.793 Test: blockdev writev readv size > 128k ...passed 00:06:13.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.793 Test: blockdev comparev and writev ...[2024-11-19 06:30:05.712533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0a000 len:0x1000 00:06:13.793 [2024-11-19 06:30:05.712581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.793 passed 00:06:13.793 Test: blockdev nvme passthru rw ...passed 00:06:13.793 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.793 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:05.713318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.793 [2024-11-19 06:30:05.713351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.793 passed 00:06:14.052 Test: blockdev copy ...passed 00:06:14.052 Suite: bdevio tests on: Nvme2n3 00:06:14.052 Test: blockdev write read block ...passed 00:06:14.052 Test: blockdev write zeroes read block ...passed 00:06:14.052 Test: blockdev write zeroes read no split ...passed 00:06:14.052 Test: blockdev write zeroes read split ...passed 00:06:14.052 Test: blockdev write zeroes read split partial ...passed 00:06:14.052 Test: blockdev reset ...[2024-11-19 06:30:05.772815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.052 [2024-11-19 06:30:05.777990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:14.052 passed 00:06:14.052 Test: blockdev write read 8 blocks ...passed 00:06:14.052 Test: blockdev write read size > 128k ...passed 00:06:14.052 Test: blockdev write read invalid size ...passed 00:06:14.052 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.052 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.052 Test: blockdev write read max offset ...passed 00:06:14.052 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.052 Test: blockdev writev readv 8 blocks ...passed 00:06:14.052 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.052 Test: blockdev writev readv block ...passed 00:06:14.052 Test: blockdev writev readv size > 128k ...passed 00:06:14.052 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.052 Test: blockdev comparev and writev ...[2024-11-19 06:30:05.785825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29cc06000 len:0x1000 00:06:14.052 [2024-11-19 06:30:05.785871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.052 passed 00:06:14.052 Test: blockdev nvme passthru rw ...passed 00:06:14.052 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.052 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:05.786484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.052 [2024-11-19 06:30:05.786510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.052 passed 00:06:14.052 Test: blockdev copy ...passed 00:06:14.052 Suite: bdevio tests on: Nvme2n2 00:06:14.052 Test: blockdev write read block ...passed 00:06:14.052 Test: blockdev write zeroes read block ...passed 00:06:14.052 Test: blockdev write zeroes read no split ...passed 00:06:14.052 Test: blockdev write zeroes read split ...passed 00:06:14.052 Test: blockdev write zeroes read split partial ...passed 00:06:14.052 Test: blockdev reset ...[2024-11-19 06:30:05.838401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.052 [2024-11-19 06:30:05.842294] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:14.052 Test: blockdev write read 8 blocks ...uccessful. 
00:06:14.052 passed 00:06:14.052 Test: blockdev write read size > 128k ...passed 00:06:14.052 Test: blockdev write read invalid size ...passed 00:06:14.052 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.052 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.052 Test: blockdev write read max offset ...passed 00:06:14.052 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.052 Test: blockdev writev readv 8 blocks ...passed 00:06:14.052 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.052 Test: blockdev writev readv block ...passed 00:06:14.052 Test: blockdev writev readv size > 128k ...passed 00:06:14.052 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.052 Test: blockdev comparev and writev ...[2024-11-19 06:30:05.849910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d523c000 len:0x1000 00:06:14.052 [2024-11-19 06:30:05.850096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.052 passed 00:06:14.052 Test: blockdev nvme passthru rw ...passed 00:06:14.052 Test: blockdev nvme passthru vendor specific ...[2024-11-19 06:30:05.851112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.052 passed[2024-11-19 06:30:05.851222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.052 00:06:14.052 Test: blockdev nvme admin passthru ...passed 00:06:14.052 Test: blockdev copy ...passed 00:06:14.052 Suite: bdevio tests on: Nvme2n1 00:06:14.052 Test: blockdev write read block ...passed 00:06:14.052 Test: blockdev write zeroes read block ...passed 00:06:14.052 Test: blockdev write zeroes read no split ...passed 00:06:14.052 Test: blockdev write zeroes read split ...passed 00:06:14.052 Test: blockdev write zeroes read split partial ...passed 00:06:14.052 Test: blockdev reset ...[2024-11-19 06:30:05.911364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.052 passed 00:06:14.052 Test: blockdev write read 8 blocks ...[2024-11-19 06:30:05.916435] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:14.052 passed 00:06:14.052 Test: blockdev write read size > 128k ...passed 00:06:14.052 Test: blockdev write read invalid size ...passed 00:06:14.052 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.052 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.052 Test: blockdev write read max offset ...passed 00:06:14.052 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.052 Test: blockdev writev readv 8 blocks ...passed 00:06:14.052 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.052 Test: blockdev writev readv block ...passed 00:06:14.052 Test: blockdev writev readv size > 128k ...passed 00:06:14.052 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.052 Test: blockdev comparev and writev ...[2024-11-19 06:30:05.932196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5238000 len:0x1000 00:06:14.052 [2024-11-19 06:30:05.932278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.052 passed 00:06:14.052 Test: blockdev nvme passthru rw ...passed 00:06:14.052 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.052 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:05.933175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.052 [2024-11-19 06:30:05.933207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.052 passed 00:06:14.052 Test: blockdev copy ...passed 00:06:14.052 Suite: bdevio tests on: Nvme1n1 00:06:14.052 Test: blockdev write read block ...passed 00:06:14.052 Test: blockdev write zeroes read block ...passed 00:06:14.052 Test: blockdev write zeroes read no split ...passed 00:06:14.052 Test: blockdev write zeroes read split ...passed 00:06:14.311 Test: blockdev write zeroes read split partial ...passed 00:06:14.311 Test: blockdev reset ...[2024-11-19 06:30:05.988428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:14.311 [2024-11-19 06:30:05.991467] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:14.311 Test: blockdev write read 8 blocks ...uccessful. 
00:06:14.311 passed 00:06:14.311 Test: blockdev write read size > 128k ...passed 00:06:14.311 Test: blockdev write read invalid size ...passed 00:06:14.311 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.311 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.311 Test: blockdev write read max offset ...passed 00:06:14.311 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.311 Test: blockdev writev readv 8 blocks ...passed 00:06:14.311 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.311 Test: blockdev writev readv block ...passed 00:06:14.311 Test: blockdev writev readv size > 128k ...passed 00:06:14.311 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.311 Test: blockdev comparev and writev ...[2024-11-19 06:30:05.999363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5234000 len:0x1000 00:06:14.311 [2024-11-19 06:30:05.999594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.311 passed 00:06:14.311 Test: blockdev nvme passthru rw ...passed 00:06:14.311 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.311 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:06.000604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.311 [2024-11-19 06:30:06.000641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.311 passed 00:06:14.311 Test: blockdev copy ...passed 00:06:14.311 Suite: bdevio tests on: Nvme0n1 00:06:14.311 Test: blockdev write read block ...passed 00:06:14.311 Test: blockdev write zeroes read block ...passed 00:06:14.311 Test: blockdev write zeroes read no split ...passed 00:06:14.311 Test: blockdev write zeroes read split ...passed 00:06:14.311 Test: blockdev write zeroes read split partial ...passed 00:06:14.311 Test: blockdev reset ...[2024-11-19 06:30:06.056483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:14.311 passed 00:06:14.311 Test: blockdev write read 8 blocks ...[2024-11-19 06:30:06.059412] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:14.311 passed 00:06:14.311 Test: blockdev write read size > 128k ...passed 00:06:14.311 Test: blockdev write read invalid size ...passed 00:06:14.311 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.311 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.311 Test: blockdev write read max offset ...passed 00:06:14.311 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.311 Test: blockdev writev readv 8 blocks ...passed 00:06:14.311 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.311 Test: blockdev writev readv block ...passed 00:06:14.311 Test: blockdev writev readv size > 128k ...passed 00:06:14.311 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.311 Test: blockdev comparev and writev ...passed 00:06:14.311 Test: blockdev nvme passthru rw ...[2024-11-19 06:30:06.066090] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:14.311 separate metadata which is not supported yet. 
00:06:14.311 passed 00:06:14.311 Test: blockdev nvme passthru vendor specific ...[2024-11-19 06:30:06.066755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:06:14.311 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:14.311 [2024-11-19 06:30:06.067198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:14.311 passed 00:06:14.311 Test: blockdev copy ...passed 00:06:14.311 00:06:14.311 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.311 suites 6 6 n/a 0 0 00:06:14.311 tests 138 138 138 0 0 00:06:14.311 asserts 893 893 893 0 n/a 00:06:14.311 00:06:14.311 Elapsed time = 1.089 seconds 00:06:14.311 0 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59988 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59988 ']' 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59988 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59988 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59988' 00:06:14.311 killing process with pid 59988 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59988 00:06:14.311 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59988 00:06:15.244 06:30:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:15.244 00:06:15.244 real 0m2.220s 00:06:15.244 user 0m5.557s 00:06:15.244 sys 0m0.318s 00:06:15.244 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.244 06:30:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:15.244 ************************************ 00:06:15.244 END TEST bdev_bounds 00:06:15.244 ************************************ 00:06:15.244 06:30:06 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.244 06:30:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:15.244 06:30:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.244 06:30:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.244 ************************************ 00:06:15.244 START TEST bdev_nbd 00:06:15.244 ************************************ 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:15.244 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60042 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60042 /var/tmp/spdk-nbd.sock 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60042 ']' 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.245 06:30:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:15.245 [2024-11-19 06:30:06.968351] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:15.245 [2024-11-19 06:30:06.968594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.245 [2024-11-19 06:30:07.127844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.503 [2024-11-19 06:30:07.240813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.069 06:30:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.328 1+0 records in 
00:06:16.328 1+0 records out 00:06:16.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350425 s, 11.7 MB/s 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.328 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.586 1+0 records in 00:06:16.586 1+0 records out 00:06:16.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272229 s, 15.0 MB/s 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.586 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.845 1+0 records in 00:06:16.845 1+0 records out 00:06:16.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423761 s, 9.7 MB/s 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.845 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.103 1+0 records in 00:06:17.103 1+0 records out 00:06:17.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747596 s, 5.5 MB/s 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.103 06:30:08 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.103 06:30:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.103 1+0 records in 00:06:17.103 1+0 records out 00:06:17.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658449 s, 6.2 MB/s 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.103 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.362 1+0 records in 00:06:17.362 1+0 records out 00:06:17.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040862 s, 10.0 MB/s 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.362 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.620 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd0", 00:06:17.620 "bdev_name": "Nvme0n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd1", 00:06:17.620 "bdev_name": "Nvme1n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd2", 00:06:17.620 "bdev_name": "Nvme2n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd3", 00:06:17.620 "bdev_name": "Nvme2n2" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd4", 00:06:17.620 "bdev_name": "Nvme2n3" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd5", 00:06:17.620 "bdev_name": "Nvme3n1" 00:06:17.620 } 00:06:17.620 ]' 00:06:17.620 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:17.620 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd0", 00:06:17.620 "bdev_name": "Nvme0n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd1", 00:06:17.620 "bdev_name": "Nvme1n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd2", 00:06:17.620 "bdev_name": "Nvme2n1" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd3", 00:06:17.620 "bdev_name": "Nvme2n2" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd4", 00:06:17.621 "bdev_name": "Nvme2n3" 00:06:17.621 }, 00:06:17.621 { 00:06:17.621 "nbd_device": "/dev/nbd5", 00:06:17.621 "bdev_name": "Nvme3n1" 00:06:17.621 } 00:06:17.621 ]' 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.621 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.878 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.137 06:30:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.137 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.396 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.654 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.912 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.171 06:30:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.171 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.172 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:19.172 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.172 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.172 06:30:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:19.429 /dev/nbd0 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.429 
06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.429 1+0 records in 00:06:19.429 1+0 records out 00:06:19.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035553 s, 11.5 MB/s 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.429 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.430 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:19.687 /dev/nbd1 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.687 1+0 records in 00:06:19.687 1+0 records out 00:06:19.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307417 s, 13.3 MB/s 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.687 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:19.945 /dev/nbd10 00:06:19.945 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:19.945 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.946 1+0 records in 00:06:19.946 1+0 records out 00:06:19.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512679 s, 8.0 MB/s 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:19.946 /dev/nbd11 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.946 06:30:11 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.946 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.206 1+0 records in 00:06:20.206 1+0 records out 00:06:20.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429875 s, 9.5 MB/s 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.206 06:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:20.206 /dev/nbd12 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.206 1+0 records in 00:06:20.206 1+0 records out 00:06:20.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820147 s, 5.0 MB/s 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.206 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:20.465 /dev/nbd13 
00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.465 1+0 records in 00:06:20.465 1+0 records out 00:06:20.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447826 s, 9.1 MB/s 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.465 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd0", 00:06:20.724 "bdev_name": "Nvme0n1" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd1", 00:06:20.724 "bdev_name": "Nvme1n1" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd10", 00:06:20.724 "bdev_name": "Nvme2n1" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd11", 00:06:20.724 "bdev_name": "Nvme2n2" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd12", 00:06:20.724 "bdev_name": "Nvme2n3" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd13", 00:06:20.724 "bdev_name": "Nvme3n1" 00:06:20.724 } 00:06:20.724 ]' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd0", 00:06:20.724 "bdev_name": "Nvme0n1" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd1", 00:06:20.724 "bdev_name": "Nvme1n1" 00:06:20.724 
}, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd10", 00:06:20.724 "bdev_name": "Nvme2n1" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd11", 00:06:20.724 "bdev_name": "Nvme2n2" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd12", 00:06:20.724 "bdev_name": "Nvme2n3" 00:06:20.724 }, 00:06:20.724 { 00:06:20.724 "nbd_device": "/dev/nbd13", 00:06:20.724 "bdev_name": "Nvme3n1" 00:06:20.724 } 00:06:20.724 ]' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.724 /dev/nbd1 00:06:20.724 /dev/nbd10 00:06:20.724 /dev/nbd11 00:06:20.724 /dev/nbd12 00:06:20.724 /dev/nbd13' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.724 /dev/nbd1 00:06:20.724 /dev/nbd10 00:06:20.724 /dev/nbd11 00:06:20.724 /dev/nbd12 00:06:20.724 /dev/nbd13' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:20.724 256+0 records in 00:06:20.724 256+0 records out 00:06:20.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00842291 s, 124 MB/s 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.724 256+0 records in 00:06:20.724 256+0 records out 00:06:20.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0566117 s, 18.5 MB/s 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.724 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.983 256+0 records in 00:06:20.983 256+0 records out 00:06:20.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644598 s, 16.3 MB/s 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:20.983 256+0 records in 00:06:20.983 256+0 records out 00:06:20.983 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0793908 s, 13.2 MB/s 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:20.983 256+0 records in 00:06:20.983 256+0 records out 00:06:20.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0773168 s, 13.6 MB/s 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.983 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:21.239 256+0 records in 00:06:21.239 256+0 records out 00:06:21.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0895077 s, 11.7 MB/s 00:06:21.239 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:21.239 256+0 records in 00:06:21.239 256+0 records out 00:06:21.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101354 s, 10.3 MB/s 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.239 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.496 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.754 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.011 
06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:22.011 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.269 06:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.269 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.527 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:22.784 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:23.041 malloc_lvol_verify 00:06:23.041 06:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:23.299 3cc6f7ad-ffd1-4a24-b9e1-06262d18472d 00:06:23.299 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:23.557 2c942111-049c-4545-b24a-e9aaf42abda1 00:06:23.557 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:23.816 /dev/nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:23.816 mke2fs 1.47.0 (5-Feb-2023) 00:06:23.816 Discarding device blocks: 0/4096 done 00:06:23.816 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:23.816 00:06:23.816 Allocating group tables: 0/1 done 00:06:23.816 Writing inode tables: 0/1 done 00:06:23.816 Creating journal (1024 blocks): done 00:06:23.816 Writing superblocks and filesystem accounting information: 0/1 done 00:06:23.816 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
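Annotation: the nbd_with_lvol_verify step traced above reduces to a short RPC sequence. The sketch below restates it under the same rpc.py path and socket used throughout this run; the argument meanings in the comments (malloc size in MiB, block size in bytes, lvol size in MiB) are inferred from the trace rather than confirmed against the RPC documentation.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev (args as in the trace)
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # small lvol named 'lvol' inside 'lvs'
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
test -e /sys/block/nbd0/size && test "$(cat /sys/block/nbd0/size)" -gt 0   # capacity is visible to the kernel
mkfs.ext4 /dev/nbd0                                                  # filesystem creation must succeed
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                            # tear the export down again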
00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60042 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60042 ']' 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60042 00:06:23.816 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60042 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.074 killing process with pid 60042 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60042' 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60042 00:06:24.074 06:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60042 00:06:24.641 06:30:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:24.641 00:06:24.641 real 0m9.507s 00:06:24.641 user 0m13.621s 00:06:24.641 sys 0m3.036s 00:06:24.641 06:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.641 06:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:24.641 ************************************ 00:06:24.641 END TEST bdev_nbd 00:06:24.641 ************************************ 00:06:24.641 06:30:16 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:24.641 06:30:16 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:24.641 skipping fio tests on NVMe due to multi-ns failures. 00:06:24.641 06:30:16 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
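Annotation: the bdev_nbd test that just finished revolves around one write/verify pattern. Below is a condensed, hedged restatement of it: the paths, device list, sizes, retry count, and cmp flags mirror the trace above, while the loop structure and the 0.1 s poll interval are simplifications rather than the exact nbd_common.sh code.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of reference data
for nbd in "${nbds[@]}"; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct       # write it to each exported device
done
for nbd in "${nbds[@]}"; do
  cmp -b -n 1M "$tmp" "$nbd"                                  # read back and byte-compare
done
rm "$tmp"
for nbd in "${nbds[@]}"; do
  "$rpc" -s "$sock" nbd_stop_disk "$nbd"                      # stop the export...
  for ((i = 1; i <= 20; i++)); do                             # ...and poll until the node is gone
    grep -q -w "$(basename "$nbd")" /proc/partitions || break
    sleep 0.1
  done
done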
00:06:24.641 06:30:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:24.641 06:30:16 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:24.641 06:30:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:24.641 06:30:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.641 06:30:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.641 ************************************ 00:06:24.641 START TEST bdev_verify 00:06:24.641 ************************************ 00:06:24.641 06:30:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:24.641 [2024-11-19 06:30:16.516207] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:24.641 [2024-11-19 06:30:16.516321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60409 ] 00:06:24.899 [2024-11-19 06:30:16.670790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.899 [2024-11-19 06:30:16.763115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.899 [2024-11-19 06:30:16.763188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.466 Running I/O for 5 seconds... 00:06:27.829 23872.00 IOPS, 93.25 MiB/s [2024-11-19T06:30:20.699Z] 22656.00 IOPS, 88.50 MiB/s [2024-11-19T06:30:21.639Z] 22805.33 IOPS, 89.08 MiB/s [2024-11-19T06:30:22.585Z] 22640.00 IOPS, 88.44 MiB/s [2024-11-19T06:30:22.585Z] 22297.60 IOPS, 87.10 MiB/s 00:06:30.656 Latency(us) 00:06:30.656 [2024-11-19T06:30:22.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:30.656 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0xbd0bd 00:06:30.656 Nvme0n1 : 5.07 1893.25 7.40 0.00 0.00 67427.67 13712.15 68560.74 00:06:30.656 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:30.656 Nvme0n1 : 5.07 1791.36 7.00 0.00 0.00 71274.92 14720.39 72997.02 00:06:30.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0xa0000 00:06:30.656 Nvme1n1 : 5.07 1892.50 7.39 0.00 0.00 67377.57 15728.64 60898.07 00:06:30.656 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0xa0000 length 0xa0000 00:06:30.656 Nvme1n1 : 5.08 1790.33 6.99 0.00 0.00 71160.14 17039.36 62107.96 00:06:30.656 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0x80000 00:06:30.656 Nvme2n1 : 5.08 1890.86 7.39 0.00 0.00 67292.22 17140.18 56865.08 00:06:30.656 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x80000 length 0x80000 00:06:30.656 Nvme2n1 : 5.08 1789.25 6.99 0.00 0.00 71049.13 17644.31 63317.86 00:06:30.656 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0x80000 00:06:30.656 Nvme2n2 : 5.08 1890.35 7.38 0.00 0.00 67176.57 18148.43 58478.28 00:06:30.656 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x80000 length 0x80000 00:06:30.656 Nvme2n2 : 5.08 1787.88 6.98 0.00 0.00 70921.20 17845.96 64124.46 00:06:30.656 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0x80000 00:06:30.656 Nvme2n3 : 5.08 1889.72 7.38 0.00 0.00 67064.23 16333.59 60091.47 00:06:30.656 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x80000 length 0x80000 00:06:30.656 Nvme2n3 : 5.09 1786.11 6.98 0.00 0.00 70804.11 12552.66 65737.65 00:06:30.656 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x0 length 0x20000 00:06:30.656 Nvme3n1 : 5.08 1888.44 7.38 0.00 0.00 66962.83 9779.99 60898.07 00:06:30.656 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.656 Verification LBA range: start 0x20000 length 0x20000 00:06:30.656 Nvme3n1 : 5.09 1785.62 6.98 0.00 0.00 70701.86 9225.45 67350.84 00:06:30.656 [2024-11-19T06:30:22.585Z] =================================================================================================================== 00:06:30.656 [2024-11-19T06:30:22.585Z] Total : 22075.68 86.23 0.00 0.00 69049.42 9225.45 72997.02 00:06:32.045 00:06:32.045 real 0m7.311s 00:06:32.045 user 0m13.613s 00:06:32.045 sys 0m0.260s 00:06:32.045 06:30:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.045 ************************************ 00:06:32.045 END TEST bdev_verify 00:06:32.045 ************************************ 00:06:32.045 06:30:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:32.045 06:30:23 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.045 06:30:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:32.045 06:30:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.045 06:30:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.045 ************************************ 00:06:32.045 START TEST bdev_verify_big_io 00:06:32.045 ************************************ 00:06:32.045 06:30:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.045 [2024-11-19 06:30:23.917079] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
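Annotation: the verify numbers above were produced by a bdevperf invocation of the following shape. The flag notes reflect general bdevperf usage; the -C flag and the trailing empty positional argument are passed through from the test script and are intentionally left uninterpreted here, and the contents of bdev.json are not shown in this log.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json        # bdev configuration used by the test
"$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write an LBA range, read it back, and compare
# -t 5       run time in seconds
# -m 0x3     core mask: two reactors, matching the "Reactor started on core 0/1" lines above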
00:06:32.045 [2024-11-19 06:30:23.917237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ] 00:06:32.307 [2024-11-19 06:30:24.090338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.568 [2024-11-19 06:30:24.253525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.568 [2024-11-19 06:30:24.253621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.142 Running I/O for 5 seconds... 00:06:36.781 880.00 IOPS, 55.00 MiB/s [2024-11-19T06:30:30.613Z] 1653.50 IOPS, 103.34 MiB/s [2024-11-19T06:30:31.181Z] 1853.00 IOPS, 115.81 MiB/s [2024-11-19T06:30:31.181Z] 2145.00 IOPS, 134.06 MiB/s 00:06:39.252 Latency(us) 00:06:39.252 [2024-11-19T06:30:31.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0xbd0b 00:06:39.252 Nvme0n1 : 5.65 113.19 7.07 0.00 0.00 1093153.79 24903.68 1064707.94 00:06:39.252 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:39.252 Nvme0n1 : 5.79 121.54 7.60 0.00 0.00 1005745.95 25407.80 1058255.16 00:06:39.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0xa000 00:06:39.252 Nvme1n1 : 5.71 116.31 7.27 0.00 0.00 1031630.78 52428.80 935652.43 00:06:39.252 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0xa000 length 0xa000 00:06:39.252 Nvme1n1 : 5.89 126.78 7.92 0.00 0.00 952061.74 49605.71 896935.78 00:06:39.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0x8000 00:06:39.252 Nvme2n1 : 5.79 121.59 7.60 0.00 0.00 958358.88 77030.01 935652.43 00:06:39.252 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x8000 length 0x8000 00:06:39.252 Nvme2n1 : 5.89 127.07 7.94 0.00 0.00 920529.41 50009.01 903388.55 00:06:39.252 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0x8000 00:06:39.252 Nvme2n2 : 5.79 121.54 7.60 0.00 0.00 925853.54 55251.89 935652.43 00:06:39.252 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x8000 length 0x8000 00:06:39.252 Nvme2n2 : 5.90 126.20 7.89 0.00 0.00 894908.99 50009.01 916294.10 00:06:39.252 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0x8000 00:06:39.252 Nvme2n3 : 5.94 133.71 8.36 0.00 0.00 818190.90 12199.78 1387346.71 00:06:39.252 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x8000 length 0x8000 00:06:39.252 Nvme2n3 : 5.90 130.23 8.14 0.00 0.00 844854.09 44362.83 929199.66 00:06:39.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x0 length 0x2000 00:06:39.252 Nvme3n1 : 5.96 152.06 9.50 0.00 0.00 699751.04 863.31 1690627.15 00:06:39.252 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.252 Verification LBA range: start 0x2000 length 0x2000 00:06:39.252 Nvme3n1 : 5.95 146.57 9.16 0.00 0.00 730254.34 664.81 955010.76 00:06:39.252 [2024-11-19T06:30:31.181Z] =================================================================================================================== 00:06:39.252 [2024-11-19T06:30:31.181Z] Total : 1536.78 96.05 0.00 0.00 895384.46 664.81 1690627.15 00:06:40.630 00:06:40.630 real 0m8.453s 00:06:40.630 user 0m15.760s 00:06:40.630 sys 0m0.369s 00:06:40.630 06:30:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.630 06:30:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:40.630 ************************************ 00:06:40.630 END TEST bdev_verify_big_io 00:06:40.630 ************************************ 00:06:40.630 06:30:32 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.630 06:30:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:40.631 06:30:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.631 06:30:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.631 ************************************ 00:06:40.631 START TEST bdev_write_zeroes 00:06:40.631 ************************************ 00:06:40.631 06:30:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.631 [2024-11-19 06:30:32.411374] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:40.631 [2024-11-19 06:30:32.411602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:06:40.892 [2024-11-19 06:30:32.562513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.892 [2024-11-19 06:30:32.657324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.463 Running I/O for 1 seconds... 
00:06:42.401 67136.00 IOPS, 262.25 MiB/s 00:06:42.401 Latency(us) 00:06:42.401 [2024-11-19T06:30:34.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.401 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme0n1 : 1.02 11182.80 43.68 0.00 0.00 11423.95 5873.03 24197.91 00:06:42.401 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme1n1 : 1.02 11168.76 43.63 0.00 0.00 11424.24 9376.69 23895.43 00:06:42.401 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme2n1 : 1.02 11155.19 43.57 0.00 0.00 11396.80 9225.45 21576.47 00:06:42.401 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme2n2 : 1.02 11141.67 43.52 0.00 0.00 11395.17 9326.28 21576.47 00:06:42.401 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme2n3 : 1.02 11128.15 43.47 0.00 0.00 11369.79 9326.28 19660.80 00:06:42.401 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.401 Nvme3n1 : 1.02 11052.00 43.17 0.00 0.00 11435.27 8822.15 25407.80 00:06:42.401 [2024-11-19T06:30:34.330Z] =================================================================================================================== 00:06:42.401 [2024-11-19T06:30:34.330Z] Total : 66828.57 261.05 0.00 0.00 11407.51 5873.03 25407.80 00:06:43.358 ************************************ 00:06:43.358 END TEST bdev_write_zeroes 00:06:43.358 ************************************ 00:06:43.358 00:06:43.358 real 0m2.587s 00:06:43.358 user 0m2.287s 00:06:43.358 sys 0m0.183s 00:06:43.358 06:30:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.358 06:30:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:43.358 06:30:34 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.358 06:30:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:43.358 06:30:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.358 06:30:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.358 ************************************ 00:06:43.358 START TEST bdev_json_nonenclosed 00:06:43.358 ************************************ 00:06:43.358 06:30:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.358 [2024-11-19 06:30:35.077221] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:43.358 [2024-11-19 06:30:35.077526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60674 ] 00:06:43.358 [2024-11-19 06:30:35.236081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.619 [2024-11-19 06:30:35.344209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.619 [2024-11-19 06:30:35.344294] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:43.619 [2024-11-19 06:30:35.344314] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:43.619 [2024-11-19 06:30:35.344322] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.619 00:06:43.619 real 0m0.485s 00:06:43.619 user 0m0.275s 00:06:43.619 sys 0m0.106s 00:06:43.619 06:30:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.619 06:30:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:43.619 ************************************ 00:06:43.619 END TEST bdev_json_nonenclosed 00:06:43.619 ************************************ 00:06:43.619 06:30:35 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.619 06:30:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:43.619 06:30:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.619 06:30:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.880 ************************************ 00:06:43.880 START TEST bdev_json_nonarray 00:06:43.880 ************************************ 00:06:43.880 06:30:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.880 [2024-11-19 06:30:35.624304] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:43.880 [2024-11-19 06:30:35.624429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:06:43.880 [2024-11-19 06:30:35.780790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.140 [2024-11-19 06:30:35.890425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.140 [2024-11-19 06:30:35.890510] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:44.140 [2024-11-19 06:30:35.890527] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:44.140 [2024-11-19 06:30:35.890536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.140 00:06:44.140 real 0m0.480s 00:06:44.140 user 0m0.282s 00:06:44.140 sys 0m0.094s 00:06:44.140 ************************************ 00:06:44.140 END TEST bdev_json_nonarray 00:06:44.140 ************************************ 00:06:44.140 06:30:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.140 06:30:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:44.401 06:30:36 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:44.401 ************************************ 00:06:44.401 END TEST blockdev_nvme 00:06:44.401 ************************************ 00:06:44.401 00:06:44.401 real 0m36.374s 00:06:44.401 user 0m56.012s 00:06:44.401 sys 0m5.384s 00:06:44.401 06:30:36 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.401 06:30:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.401 06:30:36 -- spdk/autotest.sh@209 -- # uname -s 00:06:44.401 06:30:36 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:44.401 06:30:36 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:44.401 06:30:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.401 06:30:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.401 06:30:36 -- common/autotest_common.sh@10 -- # set +x 00:06:44.401 ************************************ 00:06:44.401 START TEST blockdev_nvme_gpt 00:06:44.401 ************************************ 00:06:44.401 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:44.401 * Looking for test storage... 
00:06:44.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:44.401 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.401 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.401 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.401 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.401 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.402 06:30:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:44.402 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.402 --rc genhtml_branch_coverage=1 00:06:44.402 --rc genhtml_function_coverage=1 00:06:44.402 --rc genhtml_legend=1 00:06:44.402 --rc geninfo_all_blocks=1 00:06:44.402 --rc geninfo_unexecuted_blocks=1 00:06:44.402 00:06:44.402 ' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.402 --rc 
genhtml_branch_coverage=1 00:06:44.402 --rc genhtml_function_coverage=1 00:06:44.402 --rc genhtml_legend=1 00:06:44.402 --rc geninfo_all_blocks=1 00:06:44.402 --rc geninfo_unexecuted_blocks=1 00:06:44.402 00:06:44.402 ' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.402 --rc genhtml_branch_coverage=1 00:06:44.402 --rc genhtml_function_coverage=1 00:06:44.402 --rc genhtml_legend=1 00:06:44.402 --rc geninfo_all_blocks=1 00:06:44.402 --rc geninfo_unexecuted_blocks=1 00:06:44.402 00:06:44.402 ' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.402 --rc genhtml_branch_coverage=1 00:06:44.402 --rc genhtml_function_coverage=1 00:06:44.402 --rc genhtml_legend=1 00:06:44.402 --rc geninfo_all_blocks=1 00:06:44.402 --rc geninfo_unexecuted_blocks=1 00:06:44.402 00:06:44.402 ' 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:44.402 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60778 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60778 
00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60778 ']' 00:06:44.663 06:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.663 06:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.664 [2024-11-19 06:30:36.425848] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:44.664 [2024-11-19 06:30:36.426151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60778 ] 00:06:44.664 [2024-11-19 06:30:36.578405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.924 [2024-11-19 06:30:36.679973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.496 06:30:37 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.496 06:30:37 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:45.496 06:30:37 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:45.496 06:30:37 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:45.496 06:30:37 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.017 Waiting for block devices as requested 00:06:46.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.017 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.017 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.298 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.586 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:51.586 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.586 06:30:43 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.586 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:51.587 06:30:43 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:51.587 BYT; 00:06:51.587 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:51.587 BYT; 00:06:51.587 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.587 06:30:43 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.587 06:30:43 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:52.523 The operation has completed successfully. 00:06:52.523 06:30:44 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:53.456 The operation has completed successfully. 00:06:53.457 06:30:45 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:53.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.281 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:54.281 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.281 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.281 [] 00:06:54.281 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.281 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:54.281 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:54.281 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.281 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.538 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.538 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.538 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.796 06:30:46 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.796 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7af22f8b-0a85-4ab7-a681-409965e63a8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7af22f8b-0a85-4ab7-a681-409965e63a8f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "19dfa5fd-06db-4062-8555-d5844f3cc8ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "19dfa5fd-06db-4062-8555-d5844f3cc8ad",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0f44b05f-1246-4663-a956-719165815daf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f44b05f-1246-4663-a956-719165815daf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "774af54f-5893-4dc3-9a0b-c3018cf07631"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "774af54f-5893-4dc3-9a0b-c3018cf07631",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9d4d55ef-10c9-4f87-94d8-3c6ea9459e27"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9d4d55ef-10c9-4f87-94d8-3c6ea9459e27",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:54.797 06:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60778 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60778 ']' 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60778 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60778 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.797 killing process with pid 60778 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60778' 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60778 00:06:54.797 06:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60778 00:06:56.170 06:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:56.170 06:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.170 06:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:56.170 06:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.170 06:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.170 ************************************ 00:06:56.170 START TEST bdev_hello_world 00:06:56.170 ************************************ 00:06:56.170 06:30:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.429 [2024-11-19 
06:30:48.151572] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:56.429 [2024-11-19 06:30:48.151702] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:06:56.429 [2024-11-19 06:30:48.309999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.686 [2024-11-19 06:30:48.403279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.261 [2024-11-19 06:30:48.921776] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:57.261 [2024-11-19 06:30:48.921832] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:57.261 [2024-11-19 06:30:48.921853] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:57.261 [2024-11-19 06:30:48.924348] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:57.261 [2024-11-19 06:30:48.924748] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:57.261 [2024-11-19 06:30:48.924775] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:57.261 [2024-11-19 06:30:48.925062] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:57.261 00:06:57.261 [2024-11-19 06:30:48.925085] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:57.828 00:06:57.828 real 0m1.580s 00:06:57.828 user 0m1.262s 00:06:57.828 sys 0m0.211s 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.828 ************************************ 00:06:57.828 END TEST bdev_hello_world 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:57.828 ************************************ 00:06:57.828 06:30:49 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:57.828 06:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.828 06:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.828 06:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.828 ************************************ 00:06:57.828 START TEST bdev_bounds 00:06:57.828 ************************************ 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61435 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61435' 00:06:57.828 Process bdevio pid: 61435 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61435 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61435 ']' 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.828 06:30:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:58.092 [2024-11-19 06:30:49.785196] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:06:58.092 [2024-11-19 06:30:49.785342] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ]
00:06:58.092 [2024-11-19 06:30:49.951090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:58.352 [2024-11-19 06:30:50.066438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:58.352 [2024-11-19 06:30:50.066637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:58.352 [2024-11-19 06:30:50.066743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.918 06:30:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.918 06:30:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:58.918 06:30:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:58.918 I/O targets:
00:06:58.918 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:58.918 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:58.918 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:58.918 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:58.918 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:58.918 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:58.918 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:58.918
00:06:58.918
00:06:58.918 CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.918 http://cunit.sourceforge.net/
00:06:58.918
00:06:58.918
00:06:58.918 Suite: bdevio tests on: Nvme3n1
00:06:58.918 Test: blockdev write read block ...passed
00:06:58.918 Test: blockdev write zeroes read block ...passed
00:06:58.918 Test: blockdev write zeroes read no split ...passed
00:06:58.918 Test: blockdev write zeroes read split ...passed
00:06:58.918 Test: blockdev write zeroes read split partial ...passed
00:06:58.918 Test: blockdev reset ...[2024-11-19 06:30:50.803451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:06:58.918 [2024-11-19 06:30:50.806462] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:58.918 passed 00:06:58.918 Test: blockdev write read 8 blocks ...passed 00:06:58.918 Test: blockdev write read size > 128k ...passed 00:06:58.918 Test: blockdev write read invalid size ...passed 00:06:58.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.918 Test: blockdev write read max offset ...passed 00:06:58.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.918 Test: blockdev writev readv 8 blocks ...passed 00:06:58.918 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.918 Test: blockdev writev readv block ...passed 00:06:58.918 Test: blockdev writev readv size > 128k ...passed 00:06:58.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.918 Test: blockdev comparev and writev ...[2024-11-19 06:30:50.814711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a04000 len:0x1000 00:06:58.918 [2024-11-19 06:30:50.814762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.918 passed 00:06:58.918 Test: blockdev nvme passthru rw ...passed 00:06:58.918 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.918 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:50.815589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.918 [2024-11-19 06:30:50.815624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.918 passed 00:06:58.918 Test: blockdev copy ...passed 00:06:58.918 Suite: bdevio tests on: Nvme2n3 00:06:58.918 Test: blockdev write read block ...passed 00:06:58.918 Test: blockdev write zeroes read block ...passed 00:06:58.918 Test: blockdev write zeroes read no split ...passed 00:06:59.177 Test: blockdev write zeroes read split ...passed 00:06:59.177 Test: blockdev write zeroes read split partial ...passed 00:06:59.177 Test: blockdev reset ...[2024-11-19 06:30:50.880184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.177 [2024-11-19 06:30:50.883180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:59.177 passed 00:06:59.177 Test: blockdev write read 8 blocks ...passed 00:06:59.177 Test: blockdev write read size > 128k ...passed 00:06:59.177 Test: blockdev write read invalid size ...passed 00:06:59.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.177 Test: blockdev write read max offset ...passed 00:06:59.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.177 Test: blockdev writev readv 8 blocks ...passed 00:06:59.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.177 Test: blockdev writev readv block ...passed 00:06:59.177 Test: blockdev writev readv size > 128k ...passed 00:06:59.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.177 Test: blockdev comparev and writev ...[2024-11-19 06:30:50.889946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a02000 len:0x1000 00:06:59.177 [2024-11-19 06:30:50.889989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.177 passed 00:06:59.177 Test: blockdev nvme passthru rw ...passed 00:06:59.177 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.177 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:50.891963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.177 [2024-11-19 06:30:50.892035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.177 passed 00:06:59.177 Test: blockdev copy ...passed 00:06:59.177 Suite: bdevio tests on: Nvme2n2 00:06:59.177 Test: blockdev write read block ...passed 00:06:59.177 Test: blockdev write zeroes read block ...passed 00:06:59.177 Test: blockdev write zeroes read no split ...passed 00:06:59.177 Test: blockdev write zeroes read split ...passed 00:06:59.177 Test: blockdev write zeroes read split partial ...passed 00:06:59.177 Test: blockdev reset ...[2024-11-19 06:30:50.946837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.177 [2024-11-19 06:30:50.949454] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:59.177 passed 00:06:59.177 Test: blockdev write read 8 blocks ...passed 00:06:59.177 Test: blockdev write read size > 128k ...passed 00:06:59.177 Test: blockdev write read invalid size ...passed 00:06:59.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.177 Test: blockdev write read max offset ...passed 00:06:59.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.177 Test: blockdev writev readv 8 blocks ...passed 00:06:59.177 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.177 Test: blockdev writev readv block ...passed 00:06:59.177 Test: blockdev writev readv size > 128k ...passed 00:06:59.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.177 Test: blockdev comparev and writev ...[2024-11-19 06:30:50.955698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dce38000 len:0x1000 00:06:59.177 [2024-11-19 06:30:50.955740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.177 passed 00:06:59.177 Test: blockdev nvme passthru rw ...passed 00:06:59.177 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.177 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:50.956455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.177 [2024-11-19 06:30:50.956479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.177 passed 00:06:59.177 Test: blockdev copy ...passed 00:06:59.177 Suite: bdevio tests on: Nvme2n1 00:06:59.177 Test: blockdev write read block ...passed 00:06:59.177 Test: blockdev write zeroes read block ...passed 00:06:59.177 Test: blockdev write zeroes read no split ...passed 00:06:59.177 Test: blockdev write zeroes read split ...passed 00:06:59.177 Test: blockdev write zeroes read split partial ...passed 00:06:59.177 Test: blockdev reset ...[2024-11-19 06:30:51.000428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.177 [2024-11-19 06:30:51.003242] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:59.177 passed 00:06:59.177 Test: blockdev write read 8 blocks ...passed 00:06:59.177 Test: blockdev write read size > 128k ...passed 00:06:59.178 Test: blockdev write read invalid size ...passed 00:06:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.178 Test: blockdev write read max offset ...passed 00:06:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.178 Test: blockdev writev readv 8 blocks ...passed 00:06:59.178 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.178 Test: blockdev writev readv block ...passed 00:06:59.178 Test: blockdev writev readv size > 128k ...passed 00:06:59.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.178 Test: blockdev comparev and writev ...[2024-11-19 06:30:51.009324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dce34000 len:0x1000 00:06:59.178 [2024-11-19 06:30:51.009362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.178 passed 00:06:59.178 Test: blockdev nvme passthru rw ...passed 00:06:59.178 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.178 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:51.009916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.178 [2024-11-19 06:30:51.009948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.178 passed 00:06:59.178 Test: blockdev copy ...passed 00:06:59.178 Suite: bdevio tests on: Nvme1n1p2 00:06:59.178 Test: blockdev write read block ...passed 00:06:59.178 Test: blockdev write zeroes read block ...passed 00:06:59.178 Test: blockdev write zeroes read no split ...passed 00:06:59.178 Test: blockdev write zeroes read split ...passed 00:06:59.178 Test: blockdev write zeroes read split partial ...passed 00:06:59.178 Test: blockdev reset ...[2024-11-19 06:30:51.053577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:59.178 [2024-11-19 06:30:51.055968] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:59.178 passed 00:06:59.178 Test: blockdev write read 8 blocks ...passed 00:06:59.178 Test: blockdev write read size > 128k ...passed 00:06:59.178 Test: blockdev write read invalid size ...passed 00:06:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.178 Test: blockdev write read max offset ...passed 00:06:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.178 Test: blockdev writev readv 8 blocks ...passed 00:06:59.178 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.178 Test: blockdev writev readv block ...passed 00:06:59.178 Test: blockdev writev readv size > 128k ...passed 00:06:59.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.178 Test: blockdev comparev and writev ...[2024-11-19 06:30:51.061726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dce30000 len:0x1000 00:06:59.178 [2024-11-19 06:30:51.061760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.178 passed 00:06:59.178 Test: blockdev nvme passthru rw ...passed 00:06:59.178 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.178 Test: blockdev nvme admin passthru ...passed 00:06:59.178 Test: blockdev copy ...passed 00:06:59.178 Suite: bdevio tests on: Nvme1n1p1 00:06:59.178 Test: blockdev write read block ...passed 00:06:59.178 Test: blockdev write zeroes read block ...passed 00:06:59.178 Test: blockdev write zeroes read no split ...passed 00:06:59.178 Test: blockdev write zeroes read split ...passed 00:06:59.178 Test: blockdev write zeroes read split partial ...passed 00:06:59.178 Test: blockdev reset ...[2024-11-19 06:30:51.103054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:59.178 [2024-11-19 06:30:51.105286] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:59.178 passed 00:06:59.178 Test: blockdev write read 8 blocks ...passed 00:06:59.178 Test: blockdev write read size > 128k ...passed 00:06:59.178 Test: blockdev write read invalid size ...passed 00:06:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.178 Test: blockdev write read max offset ...passed 00:06:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.178 Test: blockdev writev readv 8 blocks ...passed 00:06:59.435 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.435 Test: blockdev writev readv block ...passed 00:06:59.435 Test: blockdev writev readv size > 128k ...passed 00:06:59.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.435 Test: blockdev comparev and writev ...[2024-11-19 06:30:51.111243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b7c0e000 len:0x1000 00:06:59.435 [2024-11-19 06:30:51.111275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.435 passed 00:06:59.435 Test: blockdev nvme passthru rw ...passed 00:06:59.435 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.435 Test: blockdev nvme admin passthru ...passed 00:06:59.435 Test: blockdev copy ...passed 00:06:59.435 Suite: bdevio tests on: Nvme0n1 00:06:59.435 Test: blockdev write read block ...passed 00:06:59.435 Test: blockdev write zeroes read block ...passed 00:06:59.435 Test: blockdev write zeroes read no split ...passed 00:06:59.435 Test: blockdev write zeroes read split ...passed 00:06:59.435 Test: blockdev write zeroes read split partial ...passed 00:06:59.435 Test: blockdev reset ...[2024-11-19 06:30:51.152724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:59.435 [2024-11-19 06:30:51.155090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:59.435 passed 00:06:59.435 Test: blockdev write read 8 blocks ...passed 00:06:59.435 Test: blockdev write read size > 128k ...passed 00:06:59.435 Test: blockdev write read invalid size ...passed 00:06:59.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.435 Test: blockdev write read max offset ...passed 00:06:59.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.435 Test: blockdev writev readv 8 blocks ...passed 00:06:59.435 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.435 Test: blockdev writev readv block ...passed 00:06:59.435 Test: blockdev writev readv size > 128k ...passed 00:06:59.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.435 Test: blockdev comparev and writev ...[2024-11-19 06:30:51.160544] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:59.435 separate metadata which is not supported yet. 
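The *ERROR* line for Nvme0n1 is likewise informational: comparev_and_writev is skipped because that bdev carries separate metadata, which the test does not support yet, and the case is still counted as passed. To confirm why a given bdev was skipped, one option is to dump its descriptor over the application's RPC socket as sketched below; bdev_get_bdevs and its -b filter are standard SPDK RPCs, but the socket path and the exact metadata field names in the JSON depend on the setup and SPDK version, so treat those as assumptions.

```bash
# Hedged example: inspect the bdev that was skipped. Adjust -s to whatever RPC socket
# the application under test is listening on; metadata-related field names (e.g. md_size)
# are version-dependent and only assumed here.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 | jq .
```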
00:06:59.435 passed 00:06:59.435 Test: blockdev nvme passthru rw ...passed 00:06:59.435 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.435 Test: blockdev nvme admin passthru ...[2024-11-19 06:30:51.161207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:59.435 [2024-11-19 06:30:51.161236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:59.435 passed 00:06:59.435 Test: blockdev copy ...passed 00:06:59.435 00:06:59.435 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.435 suites 7 7 n/a 0 0 00:06:59.435 tests 161 161 161 0 0 00:06:59.435 asserts 1025 1025 1025 0 n/a 00:06:59.435 00:06:59.435 Elapsed time = 1.085 seconds 00:06:59.435 0 00:06:59.435 06:30:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61435 00:06:59.435 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61435 ']' 00:06:59.435 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61435 00:06:59.435 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61435 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61435' 00:06:59.436 killing process with pid 61435 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61435 00:06:59.436 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61435 00:07:00.001 06:30:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:00.001 00:07:00.001 real 0m2.180s 00:07:00.001 user 0m5.525s 00:07:00.001 sys 0m0.295s 00:07:00.001 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.001 06:30:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.001 ************************************ 00:07:00.001 END TEST bdev_bounds 00:07:00.001 ************************************ 00:07:00.001 06:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.001 06:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:00.001 06:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.001 06:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.001 ************************************ 00:07:00.001 START TEST bdev_nbd 00:07:00.001 ************************************ 00:07:00.001 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:00.259 06:30:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61489 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61489 /var/tmp/spdk-nbd.sock 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61489 ']' 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.259 06:30:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.259 [2024-11-19 06:30:51.993400] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
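Before the per-device output that follows, the nbd_function_test setup traced above reduces to a few commands: start bdev_svc with the JSON config on a dedicated RPC socket, wait for it to listen, then export bdevs as kernel /dev/nbdX devices over that socket. The sketch below reuses the exact paths printed in the log; the modprobe line is an assumption (the harness only checks that /sys/module/nbd exists), and waiting for the socket is left as a comment because the harness does it with its own waitforlisten helper.

```bash
# Minimal sketch of the flow traced above, assuming the nbd kernel module is available
# and that bdev.json already defines the seven bdevs used by this test.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
CONF=$SPDK/test/bdev/bdev.json

[[ -e /sys/module/nbd ]] || sudo modprobe nbd                         # assumption: load nbd if missing
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$CONF" &   # bdev service on its own RPC socket
svc_pid=$!
# ... wait until "$SOCK" accepts RPCs (the harness uses waitforlisten) ...
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0    # export a bdev as a block device
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_get_disks                       # show current bdev-to-nbd mappings
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0             # detach again when done
kill "$svc_pid"
```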
00:07:00.259 [2024-11-19 06:30:51.993502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.259 [2024-11-19 06:30:52.148456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.517 [2024-11-19 06:30:52.259143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.082 06:30:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.339 1+0 records in 00:07:01.339 1+0 records out 00:07:01.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341325 s, 12.0 MB/s 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.339 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.597 1+0 records in 00:07:01.597 1+0 records out 00:07:01.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464644 s, 8.8 MB/s 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.597 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.855 1+0 records in 00:07:01.855 1+0 records out 00:07:01.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375255 s, 10.9 MB/s 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.855 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.856 06:30:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.856 1+0 records in 00:07:01.856 1+0 records out 00:07:01.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544684 s, 7.5 MB/s 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.114 1+0 records in 00:07:02.114 1+0 records out 00:07:02.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570324 s, 7.2 MB/s 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.114 06:30:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.371 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.372 1+0 records in 00:07:02.372 1+0 records out 00:07:02.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394888 s, 10.4 MB/s 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.372 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.629 1+0 records in 00:07:02.629 1+0 records out 00:07:02.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611786 s, 6.7 MB/s 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.629 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd0", 00:07:02.886 "bdev_name": "Nvme0n1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd1", 00:07:02.886 "bdev_name": "Nvme1n1p1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd2", 00:07:02.886 "bdev_name": "Nvme1n1p2" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd3", 00:07:02.886 "bdev_name": "Nvme2n1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd4", 00:07:02.886 "bdev_name": "Nvme2n2" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd5", 00:07:02.886 "bdev_name": "Nvme2n3" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd6", 00:07:02.886 "bdev_name": "Nvme3n1" 00:07:02.886 } 00:07:02.886 ]' 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd0", 00:07:02.886 "bdev_name": "Nvme0n1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd1", 00:07:02.886 "bdev_name": "Nvme1n1p1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd2", 00:07:02.886 "bdev_name": "Nvme1n1p2" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd3", 00:07:02.886 "bdev_name": "Nvme2n1" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd4", 00:07:02.886 "bdev_name": "Nvme2n2" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd5", 00:07:02.886 "bdev_name": "Nvme2n3" 00:07:02.886 }, 00:07:02.886 { 00:07:02.886 "nbd_device": "/dev/nbd6", 00:07:02.886 "bdev_name": "Nvme3n1" 00:07:02.886 } 00:07:02.886 ]' 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.886 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.144 06:30:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.402 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.402 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.403 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.660 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.661 06:30:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.661 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.920 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:04.178 06:30:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.178 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.437 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.695 
06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.695 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.954 /dev/nbd0 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.954 1+0 records in 00:07:04.954 1+0 records out 00:07:04.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297114 s, 13.8 MB/s 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.954 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:05.213 /dev/nbd1 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.213 06:30:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.213 1+0 records in 00:07:05.213 1+0 records out 00:07:05.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286424 s, 14.3 MB/s 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.213 06:30:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:05.471 /dev/nbd10 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.471 1+0 records in 00:07:05.471 1+0 records out 00:07:05.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571752 s, 7.2 MB/s 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.471 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:05.471 /dev/nbd11 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.730 1+0 records in 00:07:05.730 1+0 records out 00:07:05.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416828 s, 9.8 MB/s 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:05.730 /dev/nbd12 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
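The block repeated above for each exported device is the harness's readiness-plus-readability check: poll /proc/partitions until the new nbdX node appears, then read a single 4 KiB block with O_DIRECT and make sure something came back. The later dd passes write a 1 MiB urandom pattern through every /dev/nbdX the same way. A compressed sketch of both is below; the sleep interval and the temp-file locations are assumptions, and the final read-back/compare half of the data verification falls outside this excerpt.

```bash
# Sketch of the per-device check traced above (assumptions: sleep interval, /tmp paths).
waitfornbd_sketch() {
    local name=$1                               # e.g. nbd12
    for ((i = 1; i <= 20; i++)); do             # same 20-attempt bound as the trace
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1                               # assumed delay between attempts
    done
    dd if=/dev/"$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]      # the harness only checks for a non-zero copy size
}

# Data pass (the write half is what the "256+0 records" lines below show): push 1 MiB of
# random data through each exported device; the read-back compare happens afterwards.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for d in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
    dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
done
```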
00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.730 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.730 1+0 records in 00:07:05.730 1+0 records out 00:07:05.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402144 s, 10.2 MB/s 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:06.048 /dev/nbd13 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.048 1+0 records in 00:07:06.048 1+0 records out 00:07:06.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003697 s, 11.1 MB/s 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.048 06:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:06.307 /dev/nbd14 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.307 1+0 records in 00:07:06.307 1+0 records out 00:07:06.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434214 s, 9.4 MB/s 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.307 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.565 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.565 { 00:07:06.565 "nbd_device": "/dev/nbd0", 00:07:06.565 "bdev_name": "Nvme0n1" 00:07:06.565 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd1", 00:07:06.566 "bdev_name": "Nvme1n1p1" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd10", 00:07:06.566 "bdev_name": "Nvme1n1p2" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd11", 00:07:06.566 "bdev_name": "Nvme2n1" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd12", 00:07:06.566 "bdev_name": "Nvme2n2" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd13", 00:07:06.566 "bdev_name": "Nvme2n3" 
00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd14", 00:07:06.566 "bdev_name": "Nvme3n1" 00:07:06.566 } 00:07:06.566 ]' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd0", 00:07:06.566 "bdev_name": "Nvme0n1" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd1", 00:07:06.566 "bdev_name": "Nvme1n1p1" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd10", 00:07:06.566 "bdev_name": "Nvme1n1p2" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd11", 00:07:06.566 "bdev_name": "Nvme2n1" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd12", 00:07:06.566 "bdev_name": "Nvme2n2" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd13", 00:07:06.566 "bdev_name": "Nvme2n3" 00:07:06.566 }, 00:07:06.566 { 00:07:06.566 "nbd_device": "/dev/nbd14", 00:07:06.566 "bdev_name": "Nvme3n1" 00:07:06.566 } 00:07:06.566 ]' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.566 /dev/nbd1 00:07:06.566 /dev/nbd10 00:07:06.566 /dev/nbd11 00:07:06.566 /dev/nbd12 00:07:06.566 /dev/nbd13 00:07:06.566 /dev/nbd14' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.566 /dev/nbd1 00:07:06.566 /dev/nbd10 00:07:06.566 /dev/nbd11 00:07:06.566 /dev/nbd12 00:07:06.566 /dev/nbd13 00:07:06.566 /dev/nbd14' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:06.566 256+0 records in 00:07:06.566 256+0 records out 00:07:06.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121305 s, 86.4 MB/s 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.566 256+0 records in 00:07:06.566 256+0 records out 00:07:06.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0759809 s, 13.8 MB/s 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.566 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.824 256+0 records in 00:07:06.824 256+0 records out 00:07:06.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0768447 s, 13.6 MB/s 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.824 256+0 records in 00:07:06.824 256+0 records out 00:07:06.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0763241 s, 13.7 MB/s 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.824 256+0 records in 00:07:06.824 256+0 records out 00:07:06.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0800659 s, 13.1 MB/s 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.824 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:07.083 256+0 records in 00:07:07.083 256+0 records out 00:07:07.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0752651 s, 13.9 MB/s 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:07.083 256+0 records in 00:07:07.083 256+0 records out 00:07:07.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0750731 s, 14.0 MB/s 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:07.083 256+0 records in 00:07:07.083 256+0 records out 00:07:07.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0754994 s, 13.9 MB/s 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.083 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.342 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.600 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.857 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.114 06:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:08.371 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:08.371 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:08.371 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:08.371 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.371 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.372 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:08.372 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.372 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.372 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.372 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.630 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.887 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:08.888 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:09.145 malloc_lvol_verify 00:07:09.145 06:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:09.402 9846da73-cc1f-4a20-8331-38fcefd3aae8 00:07:09.402 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:09.659 cf3eae8e-3c56-437c-9e67-4940b045be44 00:07:09.659 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:09.918 /dev/nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:09.918 mke2fs 1.47.0 (5-Feb-2023) 00:07:09.918 Discarding device blocks: 0/4096 done 00:07:09.918 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:09.918 00:07:09.918 Allocating group tables: 0/1 done 00:07:09.918 Writing inode tables: 0/1 done 00:07:09.918 Creating journal (1024 blocks): done 00:07:09.918 Writing superblocks and filesystem accounting information: 0/1 done 00:07:09.918 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61489 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61489 ']' 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61489 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.918 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61489 00:07:10.176 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.176 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.176 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61489' 00:07:10.176 killing process with pid 61489 00:07:10.176 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61489 00:07:10.176 06:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61489 00:07:10.741 06:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:10.741 00:07:10.741 real 0m10.574s 00:07:10.741 user 0m15.187s 00:07:10.741 sys 0m3.484s 00:07:10.741 06:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.741 06:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:10.741 ************************************ 00:07:10.741 END TEST bdev_nbd 00:07:10.741 ************************************ 00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:10.741 skipping fio tests on NVMe due to multi-ns failures. 00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.741 06:31:02 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:10.741 06:31:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:10.741 06:31:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.741 06:31:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.741 ************************************ 00:07:10.741 START TEST bdev_verify 00:07:10.741 ************************************ 00:07:10.741 06:31:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:10.741 [2024-11-19 06:31:02.617789] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:10.741 [2024-11-19 06:31:02.617914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:07:11.000 [2024-11-19 06:31:02.768877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.000 [2024-11-19 06:31:02.870195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.000 [2024-11-19 06:31:02.870243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.565 Running I/O for 5 seconds... 
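The bdev_nbd test that finished just above (before the fio skip message) is driven entirely through rpc.py against the spdk-nbd application's socket: each bdev is exported as a /dev/nbd* device, exercised with dd and cmp, and finally a small logical volume is created, exported and formatted as an end-to-end check. A condensed, hedged sketch of that flow, using only commands, paths and sizes visible in the trace; the test itself loops over all seven exports and polls /proc/partitions via the waitfornbd helpers, which are omitted here:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # export a bdev over NBD and confirm it answers a direct read
  $RPC nbd_start_disk Nvme3n1 /dev/nbd14
  dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct

  # data verify: one random pattern, written to the export and compared back
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
  dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14

  # lvol round trip: malloc bdev -> lvstore -> 4 MiB lvol -> NBD export -> mkfs
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $RPC bdev_lvol_create lvol 4 -l lvs
  $RPC nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0

  # teardown
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd14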
00:07:13.890 24192.00 IOPS, 94.50 MiB/s [2024-11-19T06:31:06.748Z] 23296.00 IOPS, 91.00 MiB/s [2024-11-19T06:31:07.677Z] 23850.67 IOPS, 93.17 MiB/s [2024-11-19T06:31:08.609Z] 23840.00 IOPS, 93.12 MiB/s [2024-11-19T06:31:08.609Z] 23897.60 IOPS, 93.35 MiB/s 00:07:16.680 Latency(us) 00:07:16.680 [2024-11-19T06:31:08.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.680 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0xbd0bd 00:07:16.680 Nvme0n1 : 5.05 1722.92 6.73 0.00 0.00 74043.99 16736.89 77836.60 00:07:16.680 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:16.680 Nvme0n1 : 5.03 1652.55 6.46 0.00 0.00 77157.82 17644.31 87112.47 00:07:16.680 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x4ff80 00:07:16.680 Nvme1n1p1 : 5.05 1722.40 6.73 0.00 0.00 73940.10 18249.26 69770.63 00:07:16.680 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:16.680 Nvme1n1p1 : 5.07 1654.72 6.46 0.00 0.00 76778.37 7914.73 75013.51 00:07:16.680 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x4ff7f 00:07:16.680 Nvme1n1p2 : 5.06 1721.85 6.73 0.00 0.00 73830.86 17442.66 63721.16 00:07:16.680 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:16.680 Nvme1n1p2 : 5.08 1661.87 6.49 0.00 0.00 76467.64 11846.89 65737.65 00:07:16.680 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x80000 00:07:16.680 Nvme2n1 : 5.06 1721.37 6.72 0.00 0.00 73718.27 16938.54 58478.28 00:07:16.680 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x80000 length 0x80000 00:07:16.680 Nvme2n1 : 5.08 1661.40 6.49 0.00 0.00 76292.15 12149.37 64931.05 00:07:16.680 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x80000 00:07:16.680 Nvme2n2 : 5.07 1729.35 6.76 0.00 0.00 73284.15 3125.56 64527.75 00:07:16.680 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x80000 length 0x80000 00:07:16.680 Nvme2n2 : 5.09 1659.80 6.48 0.00 0.00 76135.89 15123.69 66544.25 00:07:16.680 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x80000 00:07:16.680 Nvme2n3 : 5.08 1737.01 6.79 0.00 0.00 72879.74 10939.47 68964.04 00:07:16.680 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x80000 length 0x80000 00:07:16.680 Nvme2n3 : 5.09 1659.36 6.48 0.00 0.00 76011.59 13409.67 67754.14 00:07:16.680 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x0 length 0x20000 00:07:16.680 Nvme3n1 : 5.09 1735.87 6.78 0.00 0.00 72765.99 9225.45 69367.34 00:07:16.680 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:16.680 Verification LBA range: start 0x20000 length 0x20000 00:07:16.680 Nvme3n1 
: 5.09 1658.92 6.48 0.00 0.00 75947.74 10032.05 69367.34 00:07:16.680 [2024-11-19T06:31:08.609Z] =================================================================================================================== 00:07:16.680 [2024-11-19T06:31:08.609Z] Total : 23699.38 92.58 0.00 0.00 74916.35 3125.56 87112.47 00:07:18.050 00:07:18.050 real 0m7.298s 00:07:18.050 user 0m13.618s 00:07:18.050 sys 0m0.263s 00:07:18.050 06:31:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.050 06:31:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:18.050 ************************************ 00:07:18.050 END TEST bdev_verify 00:07:18.050 ************************************ 00:07:18.050 06:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.051 06:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:18.051 06:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.051 06:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.051 ************************************ 00:07:18.051 START TEST bdev_verify_big_io 00:07:18.051 ************************************ 00:07:18.051 06:31:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.051 [2024-11-19 06:31:09.975114] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:18.051 [2024-11-19 06:31:09.975229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61997 ] 00:07:18.308 [2024-11-19 06:31:10.135300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.565 [2024-11-19 06:31:10.250027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.566 [2024-11-19 06:31:10.250033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.132 Running I/O for 5 seconds... 
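As a quick sanity check on these tables, the MiB/s column that bdevperf prints is simply IOPS multiplied by the I/O size, so the totals can be re-derived directly; the big-I/O pass that has just started uses -o 65536, so the same check applies there with 64 KiB. The numbers below are taken from the Total rows in this log:

  # 4 KiB verify pass: 23699.38 IOPS * 4096 B  ->  ~92.58 MiB/s, matching the Total row above
  awk 'BEGIN { printf "%.2f MiB/s\n", 23699.38 * 4096 / (1024 * 1024) }'
  # 64 KiB big-I/O pass: 1853.97 IOPS * 65536 B  ->  ~115.87 MiB/s, matching the Total row further down
  awk 'BEGIN { printf "%.2f MiB/s\n", 1853.97 * 65536 / (1024 * 1024) }'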
00:07:24.248 1456.00 IOPS, 91.00 MiB/s [2024-11-19T06:31:17.174Z] 2633.00 IOPS, 164.56 MiB/s [2024-11-19T06:31:17.433Z] 3371.33 IOPS, 210.71 MiB/s 00:07:25.504 Latency(us) 00:07:25.504 [2024-11-19T06:31:17.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0xbd0b 00:07:25.504 Nvme0n1 : 5.81 119.42 7.46 0.00 0.00 1016316.05 11241.94 1406705.03 00:07:25.504 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:25.504 Nvme0n1 : 5.80 90.98 5.69 0.00 0.00 1328998.66 24399.56 1716438.25 00:07:25.504 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x4ff8 00:07:25.504 Nvme1n1p1 : 5.81 122.63 7.66 0.00 0.00 964835.01 34280.37 1432516.14 00:07:25.504 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:25.504 Nvme1n1p1 : 5.81 120.74 7.55 0.00 0.00 986793.67 107277.39 1129235.69 00:07:25.504 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x4ff7 00:07:25.504 Nvme1n1p2 : 5.91 126.46 7.90 0.00 0.00 910863.59 56865.08 1458327.24 00:07:25.504 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:25.504 Nvme1n1p2 : 5.92 116.97 7.31 0.00 0.00 989124.39 55655.19 1632552.17 00:07:25.504 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x8000 00:07:25.504 Nvme2n1 : 6.00 131.79 8.24 0.00 0.00 847777.60 41943.04 1477685.56 00:07:25.504 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x8000 length 0x8000 00:07:25.504 Nvme2n1 : 5.93 125.21 7.83 0.00 0.00 896222.12 54848.59 1167952.34 00:07:25.504 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x8000 00:07:25.504 Nvme2n2 : 6.07 135.66 8.48 0.00 0.00 793500.12 44362.83 1497043.89 00:07:25.504 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x8000 length 0x8000 00:07:25.504 Nvme2n2 : 6.00 132.06 8.25 0.00 0.00 818659.47 68560.74 1187310.67 00:07:25.504 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x8000 00:07:25.504 Nvme2n3 : 6.15 148.16 9.26 0.00 0.00 708793.73 27021.00 1509949.44 00:07:25.504 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x8000 length 0x8000 00:07:25.504 Nvme2n3 : 6.15 146.39 9.15 0.00 0.00 715477.01 42951.29 1200216.22 00:07:25.504 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x0 length 0x2000 00:07:25.504 Nvme3n1 : 6.21 167.88 10.49 0.00 0.00 607295.72 472.62 1535760.54 00:07:25.504 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.504 Verification LBA range: start 0x2000 length 0x2000 00:07:25.504 Nvme3n1 : 6.20 169.61 10.60 0.00 0.00 604353.69 570.29 1219574.55 00:07:25.504 
[2024-11-19T06:31:17.433Z] =================================================================================================================== 00:07:25.504 [2024-11-19T06:31:17.433Z] Total : 1853.97 115.87 0.00 0.00 839975.12 472.62 1716438.25 00:07:26.878 00:07:26.878 real 0m8.857s 00:07:26.878 user 0m16.757s 00:07:26.878 sys 0m0.273s 00:07:26.878 06:31:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.878 ************************************ 00:07:26.878 END TEST bdev_verify_big_io 00:07:26.878 06:31:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:26.878 ************************************ 00:07:26.878 06:31:18 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.878 06:31:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.878 06:31:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.878 06:31:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.137 ************************************ 00:07:27.137 START TEST bdev_write_zeroes 00:07:27.137 ************************************ 00:07:27.137 06:31:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.137 [2024-11-19 06:31:18.871987] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:27.137 [2024-11-19 06:31:18.872084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:07:27.138 [2024-11-19 06:31:19.025027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.395 [2024-11-19 06:31:19.118316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.959 Running I/O for 1 seconds... 
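When runs like these are post-processed, the aggregate row is usually the only value checked. A small sketch of extracting it from a saved copy of this output; bdevperf.log is a hypothetical capture, and the layout is assumed to match the tables above, where the two numbers following "Total :" are IOPS and MiB/s:

  awk '/ Total : / { sub(/.* Total : /, ""); print $1 " IOPS, " $2 " MiB/s" }' bdevperf.log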
00:07:28.891 69888.00 IOPS, 273.00 MiB/s 00:07:28.891 Latency(us) 00:07:28.891 [2024-11-19T06:31:20.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.891 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme0n1 : 1.03 9926.71 38.78 0.00 0.00 12866.09 6326.74 25407.80 00:07:28.891 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme1n1p1 : 1.03 9914.44 38.73 0.00 0.00 12861.12 11141.12 25004.50 00:07:28.891 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme1n1p2 : 1.03 9902.13 38.68 0.00 0.00 12846.17 11141.12 24197.91 00:07:28.891 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme2n1 : 1.03 9890.92 38.64 0.00 0.00 12840.23 10788.23 23391.31 00:07:28.891 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme2n2 : 1.03 9879.82 38.59 0.00 0.00 12814.62 8721.33 22988.01 00:07:28.891 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme2n3 : 1.03 9868.71 38.55 0.00 0.00 12790.45 6654.42 23895.43 00:07:28.891 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.891 Nvme3n1 : 1.03 9857.63 38.51 0.00 0.00 12782.82 6452.78 25609.45 00:07:28.891 [2024-11-19T06:31:20.820Z] =================================================================================================================== 00:07:28.891 [2024-11-19T06:31:20.820Z] Total : 69240.36 270.47 0.00 0.00 12828.79 6326.74 25609.45 00:07:29.824 00:07:29.824 real 0m2.674s 00:07:29.824 user 0m2.366s 00:07:29.824 sys 0m0.194s 00:07:29.824 06:31:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.824 06:31:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:29.824 ************************************ 00:07:29.824 END TEST bdev_write_zeroes 00:07:29.824 ************************************ 00:07:29.824 06:31:21 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.824 06:31:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:29.824 06:31:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.824 06:31:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.824 ************************************ 00:07:29.824 START TEST bdev_json_nonenclosed 00:07:29.824 ************************************ 00:07:29.824 06:31:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.824 [2024-11-19 06:31:21.601452] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:29.824 [2024-11-19 06:31:21.601590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:07:30.081 [2024-11-19 06:31:21.763325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.081 [2024-11-19 06:31:21.881238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.081 [2024-11-19 06:31:21.881333] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:30.081 [2024-11-19 06:31:21.881351] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.081 [2024-11-19 06:31:21.881361] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.338 00:07:30.338 real 0m0.529s 00:07:30.338 user 0m0.331s 00:07:30.338 sys 0m0.094s 00:07:30.338 06:31:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.338 06:31:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:30.338 ************************************ 00:07:30.338 END TEST bdev_json_nonenclosed 00:07:30.338 ************************************ 00:07:30.338 06:31:22 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.338 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:30.338 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.338 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.338 ************************************ 00:07:30.338 START TEST bdev_json_nonarray 00:07:30.338 ************************************ 00:07:30.338 06:31:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.338 [2024-11-19 06:31:22.183947] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:30.338 [2024-11-19 06:31:22.184096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:07:30.595 [2024-11-19 06:31:22.354542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.595 [2024-11-19 06:31:22.471578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.596 [2024-11-19 06:31:22.471684] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
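The two json_config negative tests in this part of the log feed bdevperf a deliberately malformed --json file and expect exactly the errors just logged: a top-level value that is not a JSON object (not enclosed in {}) and a "subsystems" member that is not an array. The actual nonenclosed.json and nonarray.json fixtures are not reproduced in this log, so the following are illustrative stand-ins that would trip the same checks; a valid config is a top-level object whose "subsystems" member is an array of subsystem objects:

  # top-level array instead of an object -> "Invalid JSON configuration: not enclosed in {}."
  printf '%s\n' '[ { "subsystems": [] } ]' > nonenclosed.json
  # "subsystems" present but not an array -> "Invalid JSON configuration: 'subsystems' should be an array."
  printf '%s\n' '{ "subsystems": { "bdev": {} } }' > nonarray.json
  # well-formed shape, for comparison:
  #   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }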
00:07:30.596 [2024-11-19 06:31:22.471704] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.596 [2024-11-19 06:31:22.471714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.854 00:07:30.854 real 0m0.555s 00:07:30.854 user 0m0.331s 00:07:30.854 sys 0m0.119s 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:30.854 ************************************ 00:07:30.854 END TEST bdev_json_nonarray 00:07:30.854 ************************************ 00:07:30.854 06:31:22 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:30.854 06:31:22 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:30.854 06:31:22 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:30.854 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.854 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.854 06:31:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.854 ************************************ 00:07:30.854 START TEST bdev_gpt_uuid 00:07:30.854 ************************************ 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62210 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62210 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62210 ']' 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.854 06:31:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:31.112 [2024-11-19 06:31:22.795221] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
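The gpt_uuid test starting here reduces to launching spdk_tgt, loading the bdev config over RPC, then looking each GPT partition bdev up by its unique partition UUID and inspecting the GUIDs under driver_specific.gpt with jq. A condensed sketch of that sequence, assuming the default RPC socket and simplifying the waitforlisten/killprocess helpers to a sleep and a plain kill; the RPC methods, jq filters and partition UUID are the ones that appear in the trace that follows:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & tgt_pid=$!
  sleep 2   # stand-in for waitforlisten "$tgt_pid"
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  $RPC bdev_wait_for_examine

  # fetch the first GPT partition bdev by its unique partition GUID and cross-check the JSON
  bdev=$($RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
  echo "$bdev" | jq -r length                                            # expect 1
  echo "$bdev" | jq -r '.[0].aliases[0]'                                 # the same UUID, exposed as an alias
  echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'

  kill "$tgt_pid"   # the test proper uses killprocess, which also waits for the process to exit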
00:07:31.112 [2024-11-19 06:31:22.795339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:07:31.112 [2024-11-19 06:31:22.953715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.370 [2024-11-19 06:31:23.071848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.935 06:31:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.935 06:31:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:31.935 06:31:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:31.935 06:31:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.935 06:31:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.192 Some configs were skipped because the RPC state that can call them passed over. 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:32.192 { 00:07:32.192 "name": "Nvme1n1p1", 00:07:32.192 "aliases": [ 00:07:32.192 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:32.192 ], 00:07:32.192 "product_name": "GPT Disk", 00:07:32.192 "block_size": 4096, 00:07:32.192 "num_blocks": 655104, 00:07:32.192 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:32.192 "assigned_rate_limits": { 00:07:32.192 "rw_ios_per_sec": 0, 00:07:32.192 "rw_mbytes_per_sec": 0, 00:07:32.192 "r_mbytes_per_sec": 0, 00:07:32.192 "w_mbytes_per_sec": 0 00:07:32.192 }, 00:07:32.192 "claimed": false, 00:07:32.192 "zoned": false, 00:07:32.192 "supported_io_types": { 00:07:32.192 "read": true, 00:07:32.192 "write": true, 00:07:32.192 "unmap": true, 00:07:32.192 "flush": true, 00:07:32.192 "reset": true, 00:07:32.192 "nvme_admin": false, 00:07:32.192 "nvme_io": false, 00:07:32.192 "nvme_io_md": false, 00:07:32.192 "write_zeroes": true, 00:07:32.192 "zcopy": false, 00:07:32.192 "get_zone_info": false, 00:07:32.192 "zone_management": false, 00:07:32.192 "zone_append": false, 00:07:32.192 "compare": true, 00:07:32.192 "compare_and_write": false, 00:07:32.192 "abort": true, 00:07:32.192 "seek_hole": false, 00:07:32.192 "seek_data": false, 00:07:32.192 "copy": true, 00:07:32.192 "nvme_iov_md": false 00:07:32.192 }, 00:07:32.192 "driver_specific": { 
00:07:32.192 "gpt": { 00:07:32.192 "base_bdev": "Nvme1n1", 00:07:32.192 "offset_blocks": 256, 00:07:32.192 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:32.192 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:32.192 "partition_name": "SPDK_TEST_first" 00:07:32.192 } 00:07:32.192 } 00:07:32.192 } 00:07:32.192 ]' 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:32.192 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:32.450 { 00:07:32.450 "name": "Nvme1n1p2", 00:07:32.450 "aliases": [ 00:07:32.450 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:32.450 ], 00:07:32.450 "product_name": "GPT Disk", 00:07:32.450 "block_size": 4096, 00:07:32.450 "num_blocks": 655103, 00:07:32.450 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:32.450 "assigned_rate_limits": { 00:07:32.450 "rw_ios_per_sec": 0, 00:07:32.450 "rw_mbytes_per_sec": 0, 00:07:32.450 "r_mbytes_per_sec": 0, 00:07:32.450 "w_mbytes_per_sec": 0 00:07:32.450 }, 00:07:32.450 "claimed": false, 00:07:32.450 "zoned": false, 00:07:32.450 "supported_io_types": { 00:07:32.450 "read": true, 00:07:32.450 "write": true, 00:07:32.450 "unmap": true, 00:07:32.450 "flush": true, 00:07:32.450 "reset": true, 00:07:32.450 "nvme_admin": false, 00:07:32.450 "nvme_io": false, 00:07:32.450 "nvme_io_md": false, 00:07:32.450 "write_zeroes": true, 00:07:32.450 "zcopy": false, 00:07:32.450 "get_zone_info": false, 00:07:32.450 "zone_management": false, 00:07:32.450 "zone_append": false, 00:07:32.450 "compare": true, 00:07:32.450 "compare_and_write": false, 00:07:32.450 "abort": true, 00:07:32.450 "seek_hole": false, 00:07:32.450 "seek_data": false, 00:07:32.450 "copy": true, 00:07:32.450 "nvme_iov_md": false 00:07:32.450 }, 00:07:32.450 "driver_specific": { 00:07:32.450 "gpt": { 00:07:32.450 "base_bdev": "Nvme1n1", 00:07:32.450 "offset_blocks": 655360, 00:07:32.450 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:32.450 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:32.450 "partition_name": "SPDK_TEST_second" 00:07:32.450 } 00:07:32.450 } 00:07:32.450 } 00:07:32.450 ]' 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:32.450 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62210 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62210 ']' 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62210 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62210 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.451 killing process with pid 62210 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62210' 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62210 00:07:32.451 06:31:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62210 00:07:33.824 00:07:33.824 real 0m2.848s 00:07:33.824 user 0m2.932s 00:07:33.824 sys 0m0.414s 00:07:33.824 06:31:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.824 06:31:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 ************************************ 00:07:33.824 END TEST bdev_gpt_uuid 00:07:33.824 ************************************ 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:33.824 06:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:34.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.339 Waiting for block devices as requested 00:07:34.339 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.339 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:34.339 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.596 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.868 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:39.868 06:31:31 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:39.868 06:31:31 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:39.868 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:39.868 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:39.868 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:39.868 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:39.868 06:31:31 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:39.868 00:07:39.868 real 0m55.450s 00:07:39.868 user 1m11.152s 00:07:39.868 sys 0m7.905s 00:07:39.868 06:31:31 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.868 06:31:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.868 ************************************ 00:07:39.868 END TEST blockdev_nvme_gpt 00:07:39.868 ************************************ 00:07:39.868 06:31:31 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:39.868 06:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.868 06:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.868 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:07:39.868 ************************************ 00:07:39.868 START TEST nvme 00:07:39.868 ************************************ 00:07:39.868 06:31:31 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:39.868 * Looking for test storage... 00:07:39.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:39.868 06:31:31 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.868 06:31:31 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.868 06:31:31 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.868 06:31:31 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.868 06:31:31 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.868 06:31:31 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.868 06:31:31 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.868 06:31:31 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.868 06:31:31 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.868 06:31:31 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.868 06:31:31 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.868 06:31:31 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.868 06:31:31 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.868 06:31:31 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:39.868 06:31:31 nvme -- scripts/common.sh@345 -- # : 1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.868 06:31:31 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.868 06:31:31 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@353 -- # local d=1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.868 06:31:31 nvme -- scripts/common.sh@355 -- # echo 1 00:07:39.868 06:31:31 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.125 06:31:31 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:40.125 06:31:31 nvme -- scripts/common.sh@353 -- # local d=2 00:07:40.125 06:31:31 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.125 06:31:31 nvme -- scripts/common.sh@355 -- # echo 2 00:07:40.125 06:31:31 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.125 06:31:31 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.125 06:31:31 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.125 06:31:31 nvme -- scripts/common.sh@368 -- # return 0 00:07:40.125 06:31:31 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.125 06:31:31 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.125 --rc genhtml_branch_coverage=1 00:07:40.125 --rc genhtml_function_coverage=1 00:07:40.125 --rc genhtml_legend=1 00:07:40.125 --rc geninfo_all_blocks=1 00:07:40.125 --rc geninfo_unexecuted_blocks=1 00:07:40.125 00:07:40.125 ' 00:07:40.125 06:31:31 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.125 --rc genhtml_branch_coverage=1 00:07:40.125 --rc genhtml_function_coverage=1 00:07:40.125 --rc genhtml_legend=1 00:07:40.125 --rc geninfo_all_blocks=1 00:07:40.125 --rc geninfo_unexecuted_blocks=1 00:07:40.125 00:07:40.125 ' 00:07:40.125 06:31:31 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.125 --rc genhtml_branch_coverage=1 00:07:40.125 --rc genhtml_function_coverage=1 00:07:40.125 --rc genhtml_legend=1 00:07:40.125 --rc geninfo_all_blocks=1 00:07:40.125 --rc geninfo_unexecuted_blocks=1 00:07:40.125 00:07:40.125 ' 00:07:40.125 06:31:31 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.125 --rc genhtml_branch_coverage=1 00:07:40.125 --rc genhtml_function_coverage=1 00:07:40.125 --rc genhtml_legend=1 00:07:40.125 --rc geninfo_all_blocks=1 00:07:40.125 --rc geninfo_unexecuted_blocks=1 00:07:40.125 00:07:40.125 ' 00:07:40.125 06:31:31 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.947 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.947 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.947 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.947 06:31:32 nvme -- nvme/nvme.sh@79 -- # uname 00:07:40.947 06:31:32 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:40.947 06:31:32 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:40.947 06:31:32 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:40.947 06:31:32 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1075 -- # stubpid=62845 00:07:40.947 Waiting for stub to ready for secondary processes... 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62845 ]] 00:07:40.947 06:31:32 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:40.947 [2024-11-19 06:31:32.829820] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:40.947 [2024-11-19 06:31:32.829954] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:41.880 [2024-11-19 06:31:33.664848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.880 [2024-11-19 06:31:33.775266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.880 [2024-11-19 06:31:33.775553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.880 [2024-11-19 06:31:33.775578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.880 [2024-11-19 06:31:33.794742] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:41.880 [2024-11-19 06:31:33.795050] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.880 06:31:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:41.880 06:31:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62845 ]] 00:07:41.880 06:31:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:41.880 [2024-11-19 06:31:33.804514] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:41.880 [2024-11-19 06:31:33.804615] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:41.880 [2024-11-19 06:31:33.807420] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.880 [2024-11-19 06:31:33.807590] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:41.880 [2024-11-19 06:31:33.807647] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:41.880 [2024-11-19 06:31:33.809883] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.880 [2024-11-19 06:31:33.810073] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:41.880 [2024-11-19 06:31:33.810148] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:42.136 [2024-11-19 06:31:33.813413] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:42.136 [2024-11-19 06:31:33.813635] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:42.136 [2024-11-19 06:31:33.813716] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:42.136 [2024-11-19 06:31:33.813774] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:42.136 [2024-11-19 06:31:33.813820] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:43.069 done. 00:07:43.069 06:31:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:43.069 06:31:34 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:43.069 06:31:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:43.069 06:31:34 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:43.069 06:31:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.069 06:31:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.069 ************************************ 00:07:43.069 START TEST nvme_reset 00:07:43.069 ************************************ 00:07:43.069 06:31:34 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:43.327 Initializing NVMe Controllers 00:07:43.327 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:43.327 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:43.327 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:43.327 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:43.327 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:43.327 00:07:43.327 real 0m0.230s 00:07:43.327 user 0m0.070s 00:07:43.327 sys 0m0.107s 00:07:43.327 06:31:35 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.327 06:31:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:43.327 ************************************ 00:07:43.327 END TEST nvme_reset 00:07:43.327 ************************************ 00:07:43.327 06:31:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:43.327 06:31:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.327 06:31:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.327 06:31:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.327 ************************************ 00:07:43.327 START TEST nvme_identify 00:07:43.327 ************************************ 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:43.328 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:43.328 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:43.328 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:43.328 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:43.328 06:31:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:43.328 06:31:35 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:43.328 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:43.587 [2024-11-19 06:31:35.314387] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62874 terminated unexpected 00:07:43.587 ===================================================== 00:07:43.587 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:43.587 ===================================================== 00:07:43.587 Controller Capabilities/Features 00:07:43.587 ================================ 00:07:43.587 Vendor ID: 1b36 00:07:43.587 Subsystem Vendor ID: 1af4 00:07:43.587 Serial Number: 12340 00:07:43.587 Model Number: QEMU NVMe Ctrl 00:07:43.587 Firmware Version: 8.0.0 00:07:43.587 Recommended Arb Burst: 6 00:07:43.588 IEEE OUI Identifier: 00 54 52 00:07:43.588 Multi-path I/O 00:07:43.588 May have multiple subsystem ports: No 00:07:43.588 May have multiple controllers: No 00:07:43.588 Associated with SR-IOV VF: No 00:07:43.588 Max Data Transfer Size: 524288 00:07:43.588 Max Number of Namespaces: 256 00:07:43.588 Max Number of I/O Queues: 64 00:07:43.588 NVMe Specification Version (VS): 1.4 00:07:43.588 NVMe Specification Version (Identify): 1.4 00:07:43.588 Maximum Queue Entries: 2048 00:07:43.588 Contiguous Queues Required: Yes 00:07:43.588 Arbitration Mechanisms Supported 00:07:43.588 Weighted Round Robin: Not Supported 00:07:43.588 Vendor Specific: Not Supported 00:07:43.588 Reset Timeout: 7500 ms 00:07:43.588 Doorbell Stride: 4 bytes 00:07:43.588 NVM Subsystem Reset: Not Supported 00:07:43.588 Command Sets Supported 00:07:43.588 NVM Command Set: Supported 00:07:43.588 Boot Partition: Not Supported 00:07:43.588 Memory Page Size Minimum: 4096 bytes 00:07:43.588 Memory Page Size Maximum: 65536 bytes 00:07:43.588 Persistent Memory Region: Not Supported 00:07:43.588 Optional Asynchronous Events Supported 00:07:43.588 Namespace Attribute Notices: Supported 00:07:43.588 Firmware Activation Notices: Not Supported 00:07:43.588 ANA Change Notices: Not Supported 00:07:43.588 PLE Aggregate Log Change Notices: Not Supported 00:07:43.588 LBA Status Info Alert Notices: Not Supported 00:07:43.588 EGE Aggregate Log Change Notices: Not Supported 00:07:43.588 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.588 Zone Descriptor Change Notices: Not Supported 00:07:43.588 Discovery Log Change Notices: Not Supported 00:07:43.588 Controller Attributes 00:07:43.588 128-bit Host Identifier: Not Supported 00:07:43.588 Non-Operational Permissive Mode: Not Supported 00:07:43.588 NVM Sets: Not Supported 00:07:43.588 Read Recovery Levels: Not Supported 00:07:43.588 Endurance Groups: Not Supported 00:07:43.588 Predictable Latency Mode: Not Supported 00:07:43.588 Traffic Based Keep ALive: Not Supported 00:07:43.588 Namespace Granularity: Not Supported 00:07:43.588 SQ Associations: Not Supported 00:07:43.588 UUID List: Not Supported 00:07:43.588 Multi-Domain Subsystem: Not Supported 00:07:43.588 Fixed Capacity Management: Not Supported 00:07:43.588 Variable Capacity Management: Not Supported 00:07:43.588 Delete Endurance Group: Not Supported 00:07:43.588 Delete NVM Set: Not Supported 00:07:43.588 Extended LBA Formats Supported: Supported 00:07:43.588 Flexible Data Placement Supported: Not Supported 00:07:43.588 00:07:43.588 Controller Memory Buffer Support 00:07:43.588 ================================ 00:07:43.588 Supported: No 
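The get_nvme_bdfs trace a few lines up is the whole discovery step: scripts/gen_nvme.sh emits a JSON bdev config for the local controllers and jq pulls each PCIe address out of .config[].params.traddr. A minimal standalone sketch of that step (assuming the same checkout layout as this run; this is a condensed restatement, not the exact autotest helper):

    rootdir=/home/vagrant/spdk_repo/spdk        # assumption: same checkout path as this run

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh prints a JSON config; each entry carries the controller's PCIe address
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || { echo "No NVMe bdfs found" >&2; return 1; }
        printf '%s\n' "${bdfs[@]}"
    }

    get_nvme_bdfs    # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
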
00:07:43.588 00:07:43.588 Persistent Memory Region Support 00:07:43.588 ================================ 00:07:43.588 Supported: No 00:07:43.588 00:07:43.588 Admin Command Set Attributes 00:07:43.588 ============================ 00:07:43.588 Security Send/Receive: Not Supported 00:07:43.588 Format NVM: Supported 00:07:43.588 Firmware Activate/Download: Not Supported 00:07:43.588 Namespace Management: Supported 00:07:43.588 Device Self-Test: Not Supported 00:07:43.588 Directives: Supported 00:07:43.588 NVMe-MI: Not Supported 00:07:43.588 Virtualization Management: Not Supported 00:07:43.588 Doorbell Buffer Config: Supported 00:07:43.588 Get LBA Status Capability: Not Supported 00:07:43.588 Command & Feature Lockdown Capability: Not Supported 00:07:43.588 Abort Command Limit: 4 00:07:43.588 Async Event Request Limit: 4 00:07:43.588 Number of Firmware Slots: N/A 00:07:43.588 Firmware Slot 1 Read-Only: N/A 00:07:43.588 Firmware Activation Without Reset: N/A 00:07:43.588 Multiple Update Detection Support: N/A 00:07:43.588 Firmware Update Granularity: No Information Provided 00:07:43.588 Per-Namespace SMART Log: Yes 00:07:43.588 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.588 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:43.588 Command Effects Log Page: Supported 00:07:43.588 Get Log Page Extended Data: Supported 00:07:43.588 Telemetry Log Pages: Not Supported 00:07:43.588 Persistent Event Log Pages: Not Supported 00:07:43.588 Supported Log Pages Log Page: May Support 00:07:43.588 Commands Supported & Effects Log Page: Not Supported 00:07:43.588 Feature Identifiers & Effects Log Page:May Support 00:07:43.588 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.588 Data Area 4 for Telemetry Log: Not Supported 00:07:43.588 Error Log Page Entries Supported: 1 00:07:43.588 Keep Alive: Not Supported 00:07:43.588 00:07:43.588 NVM Command Set Attributes 00:07:43.588 ========================== 00:07:43.588 Submission Queue Entry Size 00:07:43.588 Max: 64 00:07:43.588 Min: 64 00:07:43.588 Completion Queue Entry Size 00:07:43.588 Max: 16 00:07:43.588 Min: 16 00:07:43.588 Number of Namespaces: 256 00:07:43.588 Compare Command: Supported 00:07:43.588 Write Uncorrectable Command: Not Supported 00:07:43.588 Dataset Management Command: Supported 00:07:43.588 Write Zeroes Command: Supported 00:07:43.588 Set Features Save Field: Supported 00:07:43.588 Reservations: Not Supported 00:07:43.588 Timestamp: Supported 00:07:43.588 Copy: Supported 00:07:43.588 Volatile Write Cache: Present 00:07:43.588 Atomic Write Unit (Normal): 1 00:07:43.588 Atomic Write Unit (PFail): 1 00:07:43.588 Atomic Compare & Write Unit: 1 00:07:43.588 Fused Compare & Write: Not Supported 00:07:43.588 Scatter-Gather List 00:07:43.588 SGL Command Set: Supported 00:07:43.588 SGL Keyed: Not Supported 00:07:43.588 SGL Bit Bucket Descriptor: Not Supported 00:07:43.588 SGL Metadata Pointer: Not Supported 00:07:43.588 Oversized SGL: Not Supported 00:07:43.588 SGL Metadata Address: Not Supported 00:07:43.588 SGL Offset: Not Supported 00:07:43.588 Transport SGL Data Block: Not Supported 00:07:43.588 Replay Protected Memory Block: Not Supported 00:07:43.588 00:07:43.588 Firmware Slot Information 00:07:43.588 ========================= 00:07:43.588 Active slot: 1 00:07:43.588 Slot 1 Firmware Revision: 1.0 00:07:43.588 00:07:43.588 00:07:43.588 Commands Supported and Effects 00:07:43.588 ============================== 00:07:43.588 Admin Commands 00:07:43.588 -------------- 00:07:43.588 Delete I/O Submission Queue (00h): Supported 
00:07:43.588 Create I/O Submission Queue (01h): Supported 00:07:43.588 Get Log Page (02h): Supported 00:07:43.588 Delete I/O Completion Queue (04h): Supported 00:07:43.588 Create I/O Completion Queue (05h): Supported 00:07:43.588 Identify (06h): Supported 00:07:43.588 Abort (08h): Supported 00:07:43.588 Set Features (09h): Supported 00:07:43.588 Get Features (0Ah): Supported 00:07:43.588 Asynchronous Event Request (0Ch): Supported 00:07:43.588 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.588 Directive Send (19h): Supported 00:07:43.588 Directive Receive (1Ah): Supported 00:07:43.588 Virtualization Management (1Ch): Supported 00:07:43.588 Doorbell Buffer Config (7Ch): Supported 00:07:43.588 Format NVM (80h): Supported LBA-Change 00:07:43.588 I/O Commands 00:07:43.588 ------------ 00:07:43.588 Flush (00h): Supported LBA-Change 00:07:43.588 Write (01h): Supported LBA-Change 00:07:43.588 Read (02h): Supported 00:07:43.588 Compare (05h): Supported 00:07:43.588 Write Zeroes (08h): Supported LBA-Change 00:07:43.588 Dataset Management (09h): Supported LBA-Change 00:07:43.588 Unknown (0Ch): Supported 00:07:43.588 Unknown (12h): Supported 00:07:43.588 Copy (19h): Supported LBA-Change 00:07:43.588 Unknown (1Dh): Supported LBA-Change 00:07:43.588 00:07:43.588 Error Log 00:07:43.588 ========= 00:07:43.588 00:07:43.588 Arbitration 00:07:43.588 =========== 00:07:43.588 Arbitration Burst: no limit 00:07:43.588 00:07:43.588 Power Management 00:07:43.588 ================ 00:07:43.588 Number of Power States: 1 00:07:43.588 Current Power State: Power State #0 00:07:43.588 Power State #0: 00:07:43.588 Max Power: 25.00 W 00:07:43.588 Non-Operational State: Operational 00:07:43.588 Entry Latency: 16 microseconds 00:07:43.588 Exit Latency: 4 microseconds 00:07:43.588 Relative Read Throughput: 0 00:07:43.588 Relative Read Latency: 0 00:07:43.588 Relative Write Throughput: 0 00:07:43.588 Relative Write Latency: 0 00:07:43.589 Idle Power[2024-11-19 06:31:35.315708] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62874 terminated unexpected 00:07:43.589 : Not Reported 00:07:43.589 Active Power: Not Reported 00:07:43.589 Non-Operational Permissive Mode: Not Supported 00:07:43.589 00:07:43.589 Health Information 00:07:43.589 ================== 00:07:43.589 Critical Warnings: 00:07:43.589 Available Spare Space: OK 00:07:43.589 Temperature: OK 00:07:43.589 Device Reliability: OK 00:07:43.589 Read Only: No 00:07:43.589 Volatile Memory Backup: OK 00:07:43.589 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.589 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.589 Available Spare: 0% 00:07:43.589 Available Spare Threshold: 0% 00:07:43.589 Life Percentage Used: 0% 00:07:43.589 Data Units Read: 695 00:07:43.589 Data Units Written: 623 00:07:43.589 Host Read Commands: 39290 00:07:43.589 Host Write Commands: 39076 00:07:43.589 Controller Busy Time: 0 minutes 00:07:43.589 Power Cycles: 0 00:07:43.589 Power On Hours: 0 hours 00:07:43.589 Unsafe Shutdowns: 0 00:07:43.589 Unrecoverable Media Errors: 0 00:07:43.589 Lifetime Error Log Entries: 0 00:07:43.589 Warning Temperature Time: 0 minutes 00:07:43.589 Critical Temperature Time: 0 minutes 00:07:43.589 00:07:43.589 Number of Queues 00:07:43.589 ================ 00:07:43.589 Number of I/O Submission Queues: 64 00:07:43.589 Number of I/O Completion Queues: 64 00:07:43.589 00:07:43.589 ZNS Specific Controller Data 00:07:43.589 ============================ 00:07:43.589 Zone Append Size Limit: 0 00:07:43.589 
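Stepping back to the stub handshake near the top of this test: nvme.sh launches test/app/stub/stub as the SPDK primary process (pid 62845 here) and the harness keeps sleeping until /var/run/spdk_stub0 appears, at which point it prints "done." and the per-test secondary processes can attach. A condensed sketch of that wait loop, assuming the stub publishes that sentinel file once its controllers are initialized:

    spdk_dir=/home/vagrant/spdk_repo/spdk       # assumption: same checkout path as this run

    # Launch the stub as an SPDK primary process (4096 MB of memory, cores 1-3 via -m 0xE),
    # then poll until it publishes /var/run/spdk_stub0 or exits prematurely.
    "$spdk_dir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || { echo "stub exited before becoming ready" >&2; exit 1; }
        sleep 1s
    done
    echo done.
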
00:07:43.589 00:07:43.589 Active Namespaces 00:07:43.589 ================= 00:07:43.589 Namespace ID:1 00:07:43.589 Error Recovery Timeout: Unlimited 00:07:43.589 Command Set Identifier: NVM (00h) 00:07:43.589 Deallocate: Supported 00:07:43.589 Deallocated/Unwritten Error: Supported 00:07:43.589 Deallocated Read Value: All 0x00 00:07:43.589 Deallocate in Write Zeroes: Not Supported 00:07:43.589 Deallocated Guard Field: 0xFFFF 00:07:43.589 Flush: Supported 00:07:43.589 Reservation: Not Supported 00:07:43.589 Metadata Transferred as: Separate Metadata Buffer 00:07:43.589 Namespace Sharing Capabilities: Private 00:07:43.589 Size (in LBAs): 1548666 (5GiB) 00:07:43.589 Capacity (in LBAs): 1548666 (5GiB) 00:07:43.589 Utilization (in LBAs): 1548666 (5GiB) 00:07:43.589 Thin Provisioning: Not Supported 00:07:43.589 Per-NS Atomic Units: No 00:07:43.589 Maximum Single Source Range Length: 128 00:07:43.589 Maximum Copy Length: 128 00:07:43.589 Maximum Source Range Count: 128 00:07:43.589 NGUID/EUI64 Never Reused: No 00:07:43.589 Namespace Write Protected: No 00:07:43.589 Number of LBA Formats: 8 00:07:43.589 Current LBA Format: LBA Format #07 00:07:43.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.589 00:07:43.589 NVM Specific Namespace Data 00:07:43.589 =========================== 00:07:43.589 Logical Block Storage Tag Mask: 0 00:07:43.589 Protection Information Capabilities: 00:07:43.589 16b Guard Protection Information Storage Tag Support: No 00:07:43.589 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.589 Storage Tag Check Read Support: No 00:07:43.589 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.589 ===================================================== 00:07:43.589 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:43.589 ===================================================== 00:07:43.589 Controller Capabilities/Features 00:07:43.589 ================================ 00:07:43.589 Vendor ID: 1b36 00:07:43.589 Subsystem Vendor ID: 1af4 00:07:43.589 Serial Number: 12341 00:07:43.589 Model Number: QEMU NVMe Ctrl 00:07:43.589 Firmware Version: 8.0.0 00:07:43.589 Recommended Arb Burst: 6 00:07:43.589 IEEE OUI Identifier: 00 54 52 00:07:43.589 Multi-path I/O 00:07:43.589 May have multiple subsystem ports: No 00:07:43.589 May have multiple controllers: No 
00:07:43.589 Associated with SR-IOV VF: No 00:07:43.589 Max Data Transfer Size: 524288 00:07:43.589 Max Number of Namespaces: 256 00:07:43.589 Max Number of I/O Queues: 64 00:07:43.589 NVMe Specification Version (VS): 1.4 00:07:43.589 NVMe Specification Version (Identify): 1.4 00:07:43.589 Maximum Queue Entries: 2048 00:07:43.589 Contiguous Queues Required: Yes 00:07:43.589 Arbitration Mechanisms Supported 00:07:43.589 Weighted Round Robin: Not Supported 00:07:43.589 Vendor Specific: Not Supported 00:07:43.589 Reset Timeout: 7500 ms 00:07:43.589 Doorbell Stride: 4 bytes 00:07:43.589 NVM Subsystem Reset: Not Supported 00:07:43.589 Command Sets Supported 00:07:43.589 NVM Command Set: Supported 00:07:43.589 Boot Partition: Not Supported 00:07:43.589 Memory Page Size Minimum: 4096 bytes 00:07:43.589 Memory Page Size Maximum: 65536 bytes 00:07:43.589 Persistent Memory Region: Not Supported 00:07:43.589 Optional Asynchronous Events Supported 00:07:43.589 Namespace Attribute Notices: Supported 00:07:43.589 Firmware Activation Notices: Not Supported 00:07:43.589 ANA Change Notices: Not Supported 00:07:43.589 PLE Aggregate Log Change Notices: Not Supported 00:07:43.589 LBA Status Info Alert Notices: Not Supported 00:07:43.589 EGE Aggregate Log Change Notices: Not Supported 00:07:43.589 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.589 Zone Descriptor Change Notices: Not Supported 00:07:43.589 Discovery Log Change Notices: Not Supported 00:07:43.589 Controller Attributes 00:07:43.589 128-bit Host Identifier: Not Supported 00:07:43.589 Non-Operational Permissive Mode: Not Supported 00:07:43.589 NVM Sets: Not Supported 00:07:43.589 Read Recovery Levels: Not Supported 00:07:43.589 Endurance Groups: Not Supported 00:07:43.589 Predictable Latency Mode: Not Supported 00:07:43.589 Traffic Based Keep ALive: Not Supported 00:07:43.589 Namespace Granularity: Not Supported 00:07:43.589 SQ Associations: Not Supported 00:07:43.589 UUID List: Not Supported 00:07:43.589 Multi-Domain Subsystem: Not Supported 00:07:43.589 Fixed Capacity Management: Not Supported 00:07:43.589 Variable Capacity Management: Not Supported 00:07:43.589 Delete Endurance Group: Not Supported 00:07:43.589 Delete NVM Set: Not Supported 00:07:43.589 Extended LBA Formats Supported: Supported 00:07:43.589 Flexible Data Placement Supported: Not Supported 00:07:43.589 00:07:43.589 Controller Memory Buffer Support 00:07:43.589 ================================ 00:07:43.589 Supported: No 00:07:43.589 00:07:43.589 Persistent Memory Region Support 00:07:43.589 ================================ 00:07:43.589 Supported: No 00:07:43.589 00:07:43.589 Admin Command Set Attributes 00:07:43.589 ============================ 00:07:43.589 Security Send/Receive: Not Supported 00:07:43.589 Format NVM: Supported 00:07:43.589 Firmware Activate/Download: Not Supported 00:07:43.589 Namespace Management: Supported 00:07:43.589 Device Self-Test: Not Supported 00:07:43.589 Directives: Supported 00:07:43.589 NVMe-MI: Not Supported 00:07:43.589 Virtualization Management: Not Supported 00:07:43.589 Doorbell Buffer Config: Supported 00:07:43.589 Get LBA Status Capability: Not Supported 00:07:43.589 Command & Feature Lockdown Capability: Not Supported 00:07:43.589 Abort Command Limit: 4 00:07:43.589 Async Event Request Limit: 4 00:07:43.589 Number of Firmware Slots: N/A 00:07:43.589 Firmware Slot 1 Read-Only: N/A 00:07:43.589 Firmware Activation Without Reset: N/A 00:07:43.589 Multiple Update Detection Support: N/A 00:07:43.589 Firmware Update Granularity: No 
Information Provided 00:07:43.589 Per-Namespace SMART Log: Yes 00:07:43.589 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.589 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:43.590 Command Effects Log Page: Supported 00:07:43.590 Get Log Page Extended Data: Supported 00:07:43.590 Telemetry Log Pages: Not Supported 00:07:43.590 Persistent Event Log Pages: Not Supported 00:07:43.590 Supported Log Pages Log Page: May Support 00:07:43.590 Commands Supported & Effects Log Page: Not Supported 00:07:43.590 Feature Identifiers & Effects Log Page:May Support 00:07:43.590 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.590 Data Area 4 for Telemetry Log: Not Supported 00:07:43.590 Error Log Page Entries Supported: 1 00:07:43.590 Keep Alive: Not Supported 00:07:43.590 00:07:43.590 NVM Command Set Attributes 00:07:43.590 ========================== 00:07:43.590 Submission Queue Entry Size 00:07:43.590 Max: 64 00:07:43.590 Min: 64 00:07:43.590 Completion Queue Entry Size 00:07:43.590 Max: 16 00:07:43.590 Min: 16 00:07:43.590 Number of Namespaces: 256 00:07:43.590 Compare Command: Supported 00:07:43.590 Write Uncorrectable Command: Not Supported 00:07:43.590 Dataset Management Command: Supported 00:07:43.590 Write Zeroes Command: Supported 00:07:43.590 Set Features Save Field: Supported 00:07:43.590 Reservations: Not Supported 00:07:43.590 Timestamp: Supported 00:07:43.590 Copy: Supported 00:07:43.590 Volatile Write Cache: Present 00:07:43.590 Atomic Write Unit (Normal): 1 00:07:43.590 Atomic Write Unit (PFail): 1 00:07:43.590 Atomic Compare & Write Unit: 1 00:07:43.590 Fused Compare & Write: Not Supported 00:07:43.590 Scatter-Gather List 00:07:43.590 SGL Command Set: Supported 00:07:43.590 SGL Keyed: Not Supported 00:07:43.590 SGL Bit Bucket Descriptor: Not Supported 00:07:43.590 SGL Metadata Pointer: Not Supported 00:07:43.590 Oversized SGL: Not Supported 00:07:43.590 SGL Metadata Address: Not Supported 00:07:43.590 SGL Offset: Not Supported 00:07:43.590 Transport SGL Data Block: Not Supported 00:07:43.590 Replay Protected Memory Block: Not Supported 00:07:43.590 00:07:43.590 Firmware Slot Information 00:07:43.590 ========================= 00:07:43.590 Active slot: 1 00:07:43.590 Slot 1 Firmware Revision: 1.0 00:07:43.590 00:07:43.590 00:07:43.590 Commands Supported and Effects 00:07:43.590 ============================== 00:07:43.590 Admin Commands 00:07:43.590 -------------- 00:07:43.590 Delete I/O Submission Queue (00h): Supported 00:07:43.590 Create I/O Submission Queue (01h): Supported 00:07:43.590 Get Log Page (02h): Supported 00:07:43.590 Delete I/O Completion Queue (04h): Supported 00:07:43.590 Create I/O Completion Queue (05h): Supported 00:07:43.590 Identify (06h): Supported 00:07:43.590 Abort (08h): Supported 00:07:43.590 Set Features (09h): Supported 00:07:43.590 Get Features (0Ah): Supported 00:07:43.590 Asynchronous Event Request (0Ch): Supported 00:07:43.590 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.590 Directive Send (19h): Supported 00:07:43.590 Directive Receive (1Ah): Supported 00:07:43.590 Virtualization Management (1Ch): Supported 00:07:43.590 Doorbell Buffer Config (7Ch): Supported 00:07:43.590 Format NVM (80h): Supported LBA-Change 00:07:43.590 I/O Commands 00:07:43.590 ------------ 00:07:43.590 Flush (00h): Supported LBA-Change 00:07:43.590 Write (01h): Supported LBA-Change 00:07:43.590 Read (02h): Supported 00:07:43.590 Compare (05h): Supported 00:07:43.590 Write Zeroes (08h): Supported LBA-Change 00:07:43.590 Dataset Management 
(09h): Supported LBA-Change 00:07:43.590 Unknown (0Ch): Supported 00:07:43.590 Unknown (12h): Supported 00:07:43.590 Copy (19h): Supported LBA-Change 00:07:43.590 Unknown (1Dh): Supported LBA-Change 00:07:43.590 00:07:43.590 Error Log 00:07:43.590 ========= 00:07:43.590 00:07:43.590 Arbitration 00:07:43.590 =========== 00:07:43.590 Arbitration Burst: no limit 00:07:43.590 00:07:43.590 Power Management 00:07:43.590 ================ 00:07:43.590 Number of Power States: 1 00:07:43.590 Current Power State: Power State #0 00:07:43.590 Power State #0: 00:07:43.590 Max Power: 25.00 W 00:07:43.590 Non-Operational State: Operational 00:07:43.590 Entry Latency: 16 microseconds 00:07:43.590 Exit Latency: 4 microseconds 00:07:43.590 Relative Read Throughput: 0 00:07:43.590 Relative Read Latency: 0 00:07:43.590 Relative Write Throughput: 0 00:07:43.590 Relative Write Latency: 0 00:07:43.590 Idle Power: Not Reported 00:07:43.590 Active Power: Not Reported 00:07:43.590 Non-Operational Permissive Mode: Not Supported 00:07:43.590 00:07:43.590 Health Information 00:07:43.590 ================== 00:07:43.590 Critical Warnings: 00:07:43.590 Available Spare Space: OK 00:07:43.590 Temperature: [2024-11-19 06:31:35.316690] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62874 terminated unexpected 00:07:43.590 OK 00:07:43.590 Device Reliability: OK 00:07:43.590 Read Only: No 00:07:43.590 Volatile Memory Backup: OK 00:07:43.590 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.590 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.590 Available Spare: 0% 00:07:43.590 Available Spare Threshold: 0% 00:07:43.590 Life Percentage Used: 0% 00:07:43.590 Data Units Read: 1109 00:07:43.590 Data Units Written: 982 00:07:43.590 Host Read Commands: 59139 00:07:43.590 Host Write Commands: 58025 00:07:43.590 Controller Busy Time: 0 minutes 00:07:43.590 Power Cycles: 0 00:07:43.590 Power On Hours: 0 hours 00:07:43.590 Unsafe Shutdowns: 0 00:07:43.590 Unrecoverable Media Errors: 0 00:07:43.590 Lifetime Error Log Entries: 0 00:07:43.590 Warning Temperature Time: 0 minutes 00:07:43.590 Critical Temperature Time: 0 minutes 00:07:43.590 00:07:43.590 Number of Queues 00:07:43.590 ================ 00:07:43.590 Number of I/O Submission Queues: 64 00:07:43.590 Number of I/O Completion Queues: 64 00:07:43.590 00:07:43.590 ZNS Specific Controller Data 00:07:43.590 ============================ 00:07:43.590 Zone Append Size Limit: 0 00:07:43.590 00:07:43.590 00:07:43.590 Active Namespaces 00:07:43.590 ================= 00:07:43.590 Namespace ID:1 00:07:43.590 Error Recovery Timeout: Unlimited 00:07:43.590 Command Set Identifier: NVM (00h) 00:07:43.590 Deallocate: Supported 00:07:43.590 Deallocated/Unwritten Error: Supported 00:07:43.590 Deallocated Read Value: All 0x00 00:07:43.590 Deallocate in Write Zeroes: Not Supported 00:07:43.590 Deallocated Guard Field: 0xFFFF 00:07:43.590 Flush: Supported 00:07:43.590 Reservation: Not Supported 00:07:43.590 Namespace Sharing Capabilities: Private 00:07:43.590 Size (in LBAs): 1310720 (5GiB) 00:07:43.590 Capacity (in LBAs): 1310720 (5GiB) 00:07:43.590 Utilization (in LBAs): 1310720 (5GiB) 00:07:43.590 Thin Provisioning: Not Supported 00:07:43.590 Per-NS Atomic Units: No 00:07:43.590 Maximum Single Source Range Length: 128 00:07:43.590 Maximum Copy Length: 128 00:07:43.590 Maximum Source Range Count: 128 00:07:43.590 NGUID/EUI64 Never Reused: No 00:07:43.590 Namespace Write Protected: No 00:07:43.590 Number of LBA Formats: 8 00:07:43.590 Current LBA Format: 
LBA Format #04 00:07:43.590 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.590 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.590 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.590 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.590 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.590 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.590 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.590 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.590 00:07:43.590 NVM Specific Namespace Data 00:07:43.590 =========================== 00:07:43.590 Logical Block Storage Tag Mask: 0 00:07:43.590 Protection Information Capabilities: 00:07:43.590 16b Guard Protection Information Storage Tag Support: No 00:07:43.590 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.590 Storage Tag Check Read Support: No 00:07:43.590 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.590 ===================================================== 00:07:43.590 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:43.590 ===================================================== 00:07:43.590 Controller Capabilities/Features 00:07:43.590 ================================ 00:07:43.591 Vendor ID: 1b36 00:07:43.591 Subsystem Vendor ID: 1af4 00:07:43.591 Serial Number: 12343 00:07:43.591 Model Number: QEMU NVMe Ctrl 00:07:43.591 Firmware Version: 8.0.0 00:07:43.591 Recommended Arb Burst: 6 00:07:43.591 IEEE OUI Identifier: 00 54 52 00:07:43.591 Multi-path I/O 00:07:43.591 May have multiple subsystem ports: No 00:07:43.591 May have multiple controllers: Yes 00:07:43.591 Associated with SR-IOV VF: No 00:07:43.591 Max Data Transfer Size: 524288 00:07:43.591 Max Number of Namespaces: 256 00:07:43.591 Max Number of I/O Queues: 64 00:07:43.591 NVMe Specification Version (VS): 1.4 00:07:43.591 NVMe Specification Version (Identify): 1.4 00:07:43.591 Maximum Queue Entries: 2048 00:07:43.591 Contiguous Queues Required: Yes 00:07:43.591 Arbitration Mechanisms Supported 00:07:43.591 Weighted Round Robin: Not Supported 00:07:43.591 Vendor Specific: Not Supported 00:07:43.591 Reset Timeout: 7500 ms 00:07:43.591 Doorbell Stride: 4 bytes 00:07:43.591 NVM Subsystem Reset: Not Supported 00:07:43.591 Command Sets Supported 00:07:43.591 NVM Command Set: Supported 00:07:43.591 Boot Partition: Not Supported 00:07:43.591 Memory Page Size Minimum: 4096 bytes 00:07:43.591 Memory Page Size Maximum: 65536 bytes 00:07:43.591 Persistent Memory Region: Not Supported 00:07:43.591 Optional Asynchronous Events Supported 00:07:43.591 Namespace Attribute Notices: Supported 00:07:43.591 Firmware Activation Notices: Not Supported 00:07:43.591 ANA Change Notices: Not Supported 00:07:43.591 PLE Aggregate Log 
Change Notices: Not Supported 00:07:43.591 LBA Status Info Alert Notices: Not Supported 00:07:43.591 EGE Aggregate Log Change Notices: Not Supported 00:07:43.591 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.591 Zone Descriptor Change Notices: Not Supported 00:07:43.591 Discovery Log Change Notices: Not Supported 00:07:43.591 Controller Attributes 00:07:43.591 128-bit Host Identifier: Not Supported 00:07:43.591 Non-Operational Permissive Mode: Not Supported 00:07:43.591 NVM Sets: Not Supported 00:07:43.591 Read Recovery Levels: Not Supported 00:07:43.591 Endurance Groups: Supported 00:07:43.591 Predictable Latency Mode: Not Supported 00:07:43.591 Traffic Based Keep ALive: Not Supported 00:07:43.591 Namespace Granularity: Not Supported 00:07:43.591 SQ Associations: Not Supported 00:07:43.591 UUID List: Not Supported 00:07:43.591 Multi-Domain Subsystem: Not Supported 00:07:43.591 Fixed Capacity Management: Not Supported 00:07:43.591 Variable Capacity Management: Not Supported 00:07:43.591 Delete Endurance Group: Not Supported 00:07:43.591 Delete NVM Set: Not Supported 00:07:43.591 Extended LBA Formats Supported: Supported 00:07:43.591 Flexible Data Placement Supported: Supported 00:07:43.591 00:07:43.591 Controller Memory Buffer Support 00:07:43.591 ================================ 00:07:43.591 Supported: No 00:07:43.591 00:07:43.591 Persistent Memory Region Support 00:07:43.591 ================================ 00:07:43.591 Supported: No 00:07:43.591 00:07:43.591 Admin Command Set Attributes 00:07:43.591 ============================ 00:07:43.591 Security Send/Receive: Not Supported 00:07:43.591 Format NVM: Supported 00:07:43.591 Firmware Activate/Download: Not Supported 00:07:43.591 Namespace Management: Supported 00:07:43.591 Device Self-Test: Not Supported 00:07:43.591 Directives: Supported 00:07:43.591 NVMe-MI: Not Supported 00:07:43.591 Virtualization Management: Not Supported 00:07:43.591 Doorbell Buffer Config: Supported 00:07:43.591 Get LBA Status Capability: Not Supported 00:07:43.591 Command & Feature Lockdown Capability: Not Supported 00:07:43.591 Abort Command Limit: 4 00:07:43.591 Async Event Request Limit: 4 00:07:43.591 Number of Firmware Slots: N/A 00:07:43.591 Firmware Slot 1 Read-Only: N/A 00:07:43.591 Firmware Activation Without Reset: N/A 00:07:43.591 Multiple Update Detection Support: N/A 00:07:43.591 Firmware Update Granularity: No Information Provided 00:07:43.591 Per-Namespace SMART Log: Yes 00:07:43.591 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.591 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:43.591 Command Effects Log Page: Supported 00:07:43.591 Get Log Page Extended Data: Supported 00:07:43.591 Telemetry Log Pages: Not Supported 00:07:43.591 Persistent Event Log Pages: Not Supported 00:07:43.591 Supported Log Pages Log Page: May Support 00:07:43.591 Commands Supported & Effects Log Page: Not Supported 00:07:43.591 Feature Identifiers & Effects Log Page:May Support 00:07:43.591 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.591 Data Area 4 for Telemetry Log: Not Supported 00:07:43.591 Error Log Page Entries Supported: 1 00:07:43.591 Keep Alive: Not Supported 00:07:43.591 00:07:43.591 NVM Command Set Attributes 00:07:43.591 ========================== 00:07:43.591 Submission Queue Entry Size 00:07:43.591 Max: 64 00:07:43.591 Min: 64 00:07:43.591 Completion Queue Entry Size 00:07:43.591 Max: 16 00:07:43.591 Min: 16 00:07:43.591 Number of Namespaces: 256 00:07:43.591 Compare Command: Supported 00:07:43.591 Write 
Uncorrectable Command: Not Supported 00:07:43.591 Dataset Management Command: Supported 00:07:43.591 Write Zeroes Command: Supported 00:07:43.591 Set Features Save Field: Supported 00:07:43.591 Reservations: Not Supported 00:07:43.591 Timestamp: Supported 00:07:43.591 Copy: Supported 00:07:43.591 Volatile Write Cache: Present 00:07:43.591 Atomic Write Unit (Normal): 1 00:07:43.591 Atomic Write Unit (PFail): 1 00:07:43.591 Atomic Compare & Write Unit: 1 00:07:43.591 Fused Compare & Write: Not Supported 00:07:43.591 Scatter-Gather List 00:07:43.591 SGL Command Set: Supported 00:07:43.591 SGL Keyed: Not Supported 00:07:43.591 SGL Bit Bucket Descriptor: Not Supported 00:07:43.591 SGL Metadata Pointer: Not Supported 00:07:43.591 Oversized SGL: Not Supported 00:07:43.591 SGL Metadata Address: Not Supported 00:07:43.591 SGL Offset: Not Supported 00:07:43.591 Transport SGL Data Block: Not Supported 00:07:43.591 Replay Protected Memory Block: Not Supported 00:07:43.591 00:07:43.591 Firmware Slot Information 00:07:43.591 ========================= 00:07:43.591 Active slot: 1 00:07:43.591 Slot 1 Firmware Revision: 1.0 00:07:43.591 00:07:43.591 00:07:43.591 Commands Supported and Effects 00:07:43.591 ============================== 00:07:43.591 Admin Commands 00:07:43.591 -------------- 00:07:43.591 Delete I/O Submission Queue (00h): Supported 00:07:43.591 Create I/O Submission Queue (01h): Supported 00:07:43.591 Get Log Page (02h): Supported 00:07:43.591 Delete I/O Completion Queue (04h): Supported 00:07:43.591 Create I/O Completion Queue (05h): Supported 00:07:43.591 Identify (06h): Supported 00:07:43.591 Abort (08h): Supported 00:07:43.591 Set Features (09h): Supported 00:07:43.591 Get Features (0Ah): Supported 00:07:43.591 Asynchronous Event Request (0Ch): Supported 00:07:43.591 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.591 Directive Send (19h): Supported 00:07:43.591 Directive Receive (1Ah): Supported 00:07:43.591 Virtualization Management (1Ch): Supported 00:07:43.591 Doorbell Buffer Config (7Ch): Supported 00:07:43.591 Format NVM (80h): Supported LBA-Change 00:07:43.591 I/O Commands 00:07:43.591 ------------ 00:07:43.591 Flush (00h): Supported LBA-Change 00:07:43.591 Write (01h): Supported LBA-Change 00:07:43.591 Read (02h): Supported 00:07:43.591 Compare (05h): Supported 00:07:43.591 Write Zeroes (08h): Supported LBA-Change 00:07:43.591 Dataset Management (09h): Supported LBA-Change 00:07:43.591 Unknown (0Ch): Supported 00:07:43.591 Unknown (12h): Supported 00:07:43.591 Copy (19h): Supported LBA-Change 00:07:43.591 Unknown (1Dh): Supported LBA-Change 00:07:43.591 00:07:43.591 Error Log 00:07:43.591 ========= 00:07:43.591 00:07:43.591 Arbitration 00:07:43.591 =========== 00:07:43.591 Arbitration Burst: no limit 00:07:43.591 00:07:43.591 Power Management 00:07:43.591 ================ 00:07:43.591 Number of Power States: 1 00:07:43.591 Current Power State: Power State #0 00:07:43.591 Power State #0: 00:07:43.591 Max Power: 25.00 W 00:07:43.591 Non-Operational State: Operational 00:07:43.591 Entry Latency: 16 microseconds 00:07:43.591 Exit Latency: 4 microseconds 00:07:43.591 Relative Read Throughput: 0 00:07:43.591 Relative Read Latency: 0 00:07:43.591 Relative Write Throughput: 0 00:07:43.592 Relative Write Latency: 0 00:07:43.592 Idle Power: Not Reported 00:07:43.592 Active Power: Not Reported 00:07:43.592 Non-Operational Permissive Mode: Not Supported 00:07:43.592 00:07:43.592 Health Information 00:07:43.592 ================== 00:07:43.592 Critical Warnings: 00:07:43.592 
Available Spare Space: OK 00:07:43.592 Temperature: OK 00:07:43.592 Device Reliability: OK 00:07:43.592 Read Only: No 00:07:43.592 Volatile Memory Backup: OK 00:07:43.592 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.592 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.592 Available Spare: 0% 00:07:43.592 Available Spare Threshold: 0% 00:07:43.592 Life Percentage Used: 0% 00:07:43.592 Data Units Read: 863 00:07:43.592 Data Units Written: 792 00:07:43.592 Host Read Commands: 40962 00:07:43.592 Host Write Commands: 40385 00:07:43.592 Controller Busy Time: 0 minutes 00:07:43.592 Power Cycles: 0 00:07:43.592 Power On Hours: 0 hours 00:07:43.592 Unsafe Shutdowns: 0 00:07:43.592 Unrecoverable Media Errors: 0 00:07:43.592 Lifetime Error Log Entries: 0 00:07:43.592 Warning Temperature Time: 0 minutes 00:07:43.592 Critical Temperature Time: 0 minutes 00:07:43.592 00:07:43.592 Number of Queues 00:07:43.592 ================ 00:07:43.592 Number of I/O Submission Queues: 64 00:07:43.592 Number of I/O Completion Queues: 64 00:07:43.592 00:07:43.592 ZNS Specific Controller Data 00:07:43.592 ============================ 00:07:43.592 Zone Append Size Limit: 0 00:07:43.592 00:07:43.592 00:07:43.592 Active Namespaces 00:07:43.592 ================= 00:07:43.592 Namespace ID:1 00:07:43.592 Error Recovery Timeout: Unlimited 00:07:43.592 Command Set Identifier: NVM (00h) 00:07:43.592 Deallocate: Supported 00:07:43.592 Deallocated/Unwritten Error: Supported 00:07:43.592 Deallocated Read Value: All 0x00 00:07:43.592 Deallocate in Write Zeroes: Not Supported 00:07:43.592 Deallocated Guard Field: 0xFFFF 00:07:43.592 Flush: Supported 00:07:43.592 Reservation: Not Supported 00:07:43.592 Namespace Sharing Capabilities: Multiple Controllers 00:07:43.592 Size (in LBAs): 262144 (1GiB) 00:07:43.592 Capacity (in LBAs): 262144 (1GiB) 00:07:43.592 Utilization (in LBAs): 262144 (1GiB) 00:07:43.592 Thin Provisioning: Not Supported 00:07:43.592 Per-NS Atomic Units: No 00:07:43.592 Maximum Single Source Range Length: 128 00:07:43.592 Maximum Copy Length: 128 00:07:43.592 Maximum Source Range Count: 128 00:07:43.592 NGUID/EUI64 Never Reused: No 00:07:43.592 Namespace Write Protected: No 00:07:43.592 Endurance group ID: 1 00:07:43.592 Number of LBA Formats: 8 00:07:43.592 Current LBA Format: LBA Format #04 00:07:43.592 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.592 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.592 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.592 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.592 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.592 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.592 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.592 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.592 00:07:43.592 Get Feature FDP: 00:07:43.592 ================ 00:07:43.592 Enabled: Yes 00:07:43.592 FDP configuration index: 0 00:07:43.592 00:07:43.592 FDP configurations log page 00:07:43.592 =========================== 00:07:43.592 Number of FDP configurations: 1 00:07:43.592 Version: 0 00:07:43.592 Size: 112 00:07:43.592 FDP Configuration Descriptor: 0 00:07:43.592 Descriptor Size: 96 00:07:43.592 Reclaim Group Identifier format: 2 00:07:43.592 FDP Volatile Write Cache: Not Present 00:07:43.592 FDP Configuration: Valid 00:07:43.592 Vendor Specific Size: 0 00:07:43.592 Number of Reclaim Groups: 2 00:07:43.592 Number of Recalim Unit Handles: 8 00:07:43.592 Max Placement Identifiers: 128 00:07:43.592 Number of 
Namespaces Suppprted: 256 00:07:43.592 Reclaim unit Nominal Size: 6000000 bytes 00:07:43.592 Estimated Reclaim Unit Time Limit: Not Reported 00:07:43.592 RUH Desc #000: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #001: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #002: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #003: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #004: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #005: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #006: RUH Type: Initially Isolated 00:07:43.592 RUH Desc #007: RUH Type: Initially Isolated 00:07:43.592 00:07:43.592 FDP reclaim unit handle usage log page 00:07:43.592 ====================================== 00:07:43.592 Number of Reclaim Unit Handles: 8 00:07:43.592 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:43.592 RUH Usage Desc #001: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #002: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #003: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #004: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #005: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #006: RUH Attributes: Unused 00:07:43.592 RUH Usage Desc #007: RUH Attributes: Unused 00:07:43.592 00:07:43.592 FDP statistics log page 00:07:43.592 ======================= 00:07:43.592 Host bytes with metadata written: 509255680 00:07:43.592 Medi[2024-11-19 06:31:35.318051] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62874 terminated unexpected 00:07:43.592 a bytes with metadata written: 509313024 00:07:43.592 Media bytes erased: 0 00:07:43.592 00:07:43.592 FDP events log page 00:07:43.592 =================== 00:07:43.592 Number of FDP events: 0 00:07:43.592 00:07:43.592 NVM Specific Namespace Data 00:07:43.592 =========================== 00:07:43.592 Logical Block Storage Tag Mask: 0 00:07:43.592 Protection Information Capabilities: 00:07:43.592 16b Guard Protection Information Storage Tag Support: No 00:07:43.592 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.592 Storage Tag Check Read Support: No 00:07:43.592 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.592 ===================================================== 00:07:43.592 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:43.592 ===================================================== 00:07:43.592 Controller Capabilities/Features 00:07:43.592 ================================ 00:07:43.592 Vendor ID: 1b36 00:07:43.592 Subsystem Vendor ID: 1af4 00:07:43.592 Serial Number: 12342 00:07:43.592 Model Number: QEMU NVMe Ctrl 00:07:43.592 Firmware Version: 8.0.0 00:07:43.592 Recommended Arb Burst: 6 00:07:43.592 IEEE OUI Identifier: 00 54 52 00:07:43.592 Multi-path I/O 
00:07:43.592 May have multiple subsystem ports: No 00:07:43.592 May have multiple controllers: No 00:07:43.592 Associated with SR-IOV VF: No 00:07:43.592 Max Data Transfer Size: 524288 00:07:43.592 Max Number of Namespaces: 256 00:07:43.592 Max Number of I/O Queues: 64 00:07:43.592 NVMe Specification Version (VS): 1.4 00:07:43.592 NVMe Specification Version (Identify): 1.4 00:07:43.592 Maximum Queue Entries: 2048 00:07:43.592 Contiguous Queues Required: Yes 00:07:43.592 Arbitration Mechanisms Supported 00:07:43.592 Weighted Round Robin: Not Supported 00:07:43.592 Vendor Specific: Not Supported 00:07:43.592 Reset Timeout: 7500 ms 00:07:43.592 Doorbell Stride: 4 bytes 00:07:43.593 NVM Subsystem Reset: Not Supported 00:07:43.593 Command Sets Supported 00:07:43.593 NVM Command Set: Supported 00:07:43.593 Boot Partition: Not Supported 00:07:43.593 Memory Page Size Minimum: 4096 bytes 00:07:43.593 Memory Page Size Maximum: 65536 bytes 00:07:43.593 Persistent Memory Region: Not Supported 00:07:43.593 Optional Asynchronous Events Supported 00:07:43.593 Namespace Attribute Notices: Supported 00:07:43.593 Firmware Activation Notices: Not Supported 00:07:43.593 ANA Change Notices: Not Supported 00:07:43.593 PLE Aggregate Log Change Notices: Not Supported 00:07:43.593 LBA Status Info Alert Notices: Not Supported 00:07:43.593 EGE Aggregate Log Change Notices: Not Supported 00:07:43.593 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.593 Zone Descriptor Change Notices: Not Supported 00:07:43.593 Discovery Log Change Notices: Not Supported 00:07:43.593 Controller Attributes 00:07:43.593 128-bit Host Identifier: Not Supported 00:07:43.593 Non-Operational Permissive Mode: Not Supported 00:07:43.593 NVM Sets: Not Supported 00:07:43.593 Read Recovery Levels: Not Supported 00:07:43.593 Endurance Groups: Not Supported 00:07:43.593 Predictable Latency Mode: Not Supported 00:07:43.593 Traffic Based Keep ALive: Not Supported 00:07:43.593 Namespace Granularity: Not Supported 00:07:43.593 SQ Associations: Not Supported 00:07:43.593 UUID List: Not Supported 00:07:43.593 Multi-Domain Subsystem: Not Supported 00:07:43.593 Fixed Capacity Management: Not Supported 00:07:43.593 Variable Capacity Management: Not Supported 00:07:43.593 Delete Endurance Group: Not Supported 00:07:43.593 Delete NVM Set: Not Supported 00:07:43.593 Extended LBA Formats Supported: Supported 00:07:43.593 Flexible Data Placement Supported: Not Supported 00:07:43.593 00:07:43.593 Controller Memory Buffer Support 00:07:43.593 ================================ 00:07:43.593 Supported: No 00:07:43.593 00:07:43.593 Persistent Memory Region Support 00:07:43.593 ================================ 00:07:43.593 Supported: No 00:07:43.593 00:07:43.593 Admin Command Set Attributes 00:07:43.593 ============================ 00:07:43.593 Security Send/Receive: Not Supported 00:07:43.593 Format NVM: Supported 00:07:43.593 Firmware Activate/Download: Not Supported 00:07:43.593 Namespace Management: Supported 00:07:43.593 Device Self-Test: Not Supported 00:07:43.593 Directives: Supported 00:07:43.593 NVMe-MI: Not Supported 00:07:43.593 Virtualization Management: Not Supported 00:07:43.593 Doorbell Buffer Config: Supported 00:07:43.593 Get LBA Status Capability: Not Supported 00:07:43.593 Command & Feature Lockdown Capability: Not Supported 00:07:43.593 Abort Command Limit: 4 00:07:43.593 Async Event Request Limit: 4 00:07:43.593 Number of Firmware Slots: N/A 00:07:43.593 Firmware Slot 1 Read-Only: N/A 00:07:43.593 Firmware Activation Without Reset: N/A 
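Of the four subsystems dumped in this identify pass, only 12343 (0000:00:13.0 in this run) reports Flexible Data Placement, i.e. a "Get Feature FDP" section with "Enabled: Yes" plus FDP configuration and reclaim-unit log pages, which is what the FDP-specific tests later in the run depend on. A hedged sketch of how that could be checked by scraping the same identify text shown above (the -r transport-ID syntax and the grep patterns are assumptions based on this output, not the harness's own helper):

    rootdir=/home/vagrant/spdk_repo/spdk        # assumption: same checkout path as this run

    # Hypothetical helper: true if the controller at $1 prints "Enabled: Yes" in the
    # "Get Feature FDP" section of spdk_nvme_identify output.
    has_fdp() {
        local bdf=$1
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
            | grep -A 2 'Get Feature FDP' | grep -q 'Enabled: Yes'
    }

    has_fdp 0000:00:13.0 && echo "FDP-capable controller at 0000:00:13.0"
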
00:07:43.593 Multiple Update Detection Support: N/A 00:07:43.593 Firmware Update Granularity: No Information Provided 00:07:43.593 Per-Namespace SMART Log: Yes 00:07:43.593 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.593 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:43.593 Command Effects Log Page: Supported 00:07:43.593 Get Log Page Extended Data: Supported 00:07:43.593 Telemetry Log Pages: Not Supported 00:07:43.593 Persistent Event Log Pages: Not Supported 00:07:43.593 Supported Log Pages Log Page: May Support 00:07:43.593 Commands Supported & Effects Log Page: Not Supported 00:07:43.593 Feature Identifiers & Effects Log Page:May Support 00:07:43.593 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.593 Data Area 4 for Telemetry Log: Not Supported 00:07:43.593 Error Log Page Entries Supported: 1 00:07:43.593 Keep Alive: Not Supported 00:07:43.593 00:07:43.593 NVM Command Set Attributes 00:07:43.593 ========================== 00:07:43.593 Submission Queue Entry Size 00:07:43.593 Max: 64 00:07:43.593 Min: 64 00:07:43.593 Completion Queue Entry Size 00:07:43.593 Max: 16 00:07:43.593 Min: 16 00:07:43.593 Number of Namespaces: 256 00:07:43.593 Compare Command: Supported 00:07:43.593 Write Uncorrectable Command: Not Supported 00:07:43.593 Dataset Management Command: Supported 00:07:43.593 Write Zeroes Command: Supported 00:07:43.593 Set Features Save Field: Supported 00:07:43.593 Reservations: Not Supported 00:07:43.593 Timestamp: Supported 00:07:43.593 Copy: Supported 00:07:43.593 Volatile Write Cache: Present 00:07:43.593 Atomic Write Unit (Normal): 1 00:07:43.593 Atomic Write Unit (PFail): 1 00:07:43.593 Atomic Compare & Write Unit: 1 00:07:43.593 Fused Compare & Write: Not Supported 00:07:43.593 Scatter-Gather List 00:07:43.593 SGL Command Set: Supported 00:07:43.593 SGL Keyed: Not Supported 00:07:43.593 SGL Bit Bucket Descriptor: Not Supported 00:07:43.593 SGL Metadata Pointer: Not Supported 00:07:43.593 Oversized SGL: Not Supported 00:07:43.593 SGL Metadata Address: Not Supported 00:07:43.593 SGL Offset: Not Supported 00:07:43.593 Transport SGL Data Block: Not Supported 00:07:43.593 Replay Protected Memory Block: Not Supported 00:07:43.593 00:07:43.593 Firmware Slot Information 00:07:43.593 ========================= 00:07:43.593 Active slot: 1 00:07:43.593 Slot 1 Firmware Revision: 1.0 00:07:43.593 00:07:43.593 00:07:43.593 Commands Supported and Effects 00:07:43.593 ============================== 00:07:43.593 Admin Commands 00:07:43.593 -------------- 00:07:43.593 Delete I/O Submission Queue (00h): Supported 00:07:43.593 Create I/O Submission Queue (01h): Supported 00:07:43.593 Get Log Page (02h): Supported 00:07:43.593 Delete I/O Completion Queue (04h): Supported 00:07:43.593 Create I/O Completion Queue (05h): Supported 00:07:43.593 Identify (06h): Supported 00:07:43.593 Abort (08h): Supported 00:07:43.593 Set Features (09h): Supported 00:07:43.593 Get Features (0Ah): Supported 00:07:43.593 Asynchronous Event Request (0Ch): Supported 00:07:43.593 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.593 Directive Send (19h): Supported 00:07:43.593 Directive Receive (1Ah): Supported 00:07:43.593 Virtualization Management (1Ch): Supported 00:07:43.593 Doorbell Buffer Config (7Ch): Supported 00:07:43.593 Format NVM (80h): Supported LBA-Change 00:07:43.593 I/O Commands 00:07:43.593 ------------ 00:07:43.593 Flush (00h): Supported LBA-Change 00:07:43.593 Write (01h): Supported LBA-Change 00:07:43.593 Read (02h): Supported 00:07:43.593 Compare (05h): 
Supported 00:07:43.593 Write Zeroes (08h): Supported LBA-Change 00:07:43.593 Dataset Management (09h): Supported LBA-Change 00:07:43.593 Unknown (0Ch): Supported 00:07:43.593 Unknown (12h): Supported 00:07:43.593 Copy (19h): Supported LBA-Change 00:07:43.593 Unknown (1Dh): Supported LBA-Change 00:07:43.593 00:07:43.593 Error Log 00:07:43.593 ========= 00:07:43.593 00:07:43.593 Arbitration 00:07:43.593 =========== 00:07:43.593 Arbitration Burst: no limit 00:07:43.593 00:07:43.593 Power Management 00:07:43.593 ================ 00:07:43.593 Number of Power States: 1 00:07:43.593 Current Power State: Power State #0 00:07:43.593 Power State #0: 00:07:43.593 Max Power: 25.00 W 00:07:43.593 Non-Operational State: Operational 00:07:43.593 Entry Latency: 16 microseconds 00:07:43.593 Exit Latency: 4 microseconds 00:07:43.593 Relative Read Throughput: 0 00:07:43.593 Relative Read Latency: 0 00:07:43.593 Relative Write Throughput: 0 00:07:43.593 Relative Write Latency: 0 00:07:43.593 Idle Power: Not Reported 00:07:43.593 Active Power: Not Reported 00:07:43.593 Non-Operational Permissive Mode: Not Supported 00:07:43.593 00:07:43.593 Health Information 00:07:43.593 ================== 00:07:43.593 Critical Warnings: 00:07:43.593 Available Spare Space: OK 00:07:43.593 Temperature: OK 00:07:43.593 Device Reliability: OK 00:07:43.593 Read Only: No 00:07:43.593 Volatile Memory Backup: OK 00:07:43.593 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.594 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.594 Available Spare: 0% 00:07:43.594 Available Spare Threshold: 0% 00:07:43.594 Life Percentage Used: 0% 00:07:43.594 Data Units Read: 2307 00:07:43.594 Data Units Written: 2094 00:07:43.594 Host Read Commands: 120504 00:07:43.594 Host Write Commands: 118774 00:07:43.594 Controller Busy Time: 0 minutes 00:07:43.594 Power Cycles: 0 00:07:43.594 Power On Hours: 0 hours 00:07:43.594 Unsafe Shutdowns: 0 00:07:43.594 Unrecoverable Media Errors: 0 00:07:43.594 Lifetime Error Log Entries: 0 00:07:43.594 Warning Temperature Time: 0 minutes 00:07:43.594 Critical Temperature Time: 0 minutes 00:07:43.594 00:07:43.594 Number of Queues 00:07:43.594 ================ 00:07:43.594 Number of I/O Submission Queues: 64 00:07:43.594 Number of I/O Completion Queues: 64 00:07:43.594 00:07:43.594 ZNS Specific Controller Data 00:07:43.594 ============================ 00:07:43.594 Zone Append Size Limit: 0 00:07:43.594 00:07:43.594 00:07:43.594 Active Namespaces 00:07:43.594 ================= 00:07:43.594 Namespace ID:1 00:07:43.594 Error Recovery Timeout: Unlimited 00:07:43.594 Command Set Identifier: NVM (00h) 00:07:43.594 Deallocate: Supported 00:07:43.594 Deallocated/Unwritten Error: Supported 00:07:43.594 Deallocated Read Value: All 0x00 00:07:43.594 Deallocate in Write Zeroes: Not Supported 00:07:43.594 Deallocated Guard Field: 0xFFFF 00:07:43.594 Flush: Supported 00:07:43.594 Reservation: Not Supported 00:07:43.594 Namespace Sharing Capabilities: Private 00:07:43.594 Size (in LBAs): 1048576 (4GiB) 00:07:43.594 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.594 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.594 Thin Provisioning: Not Supported 00:07:43.594 Per-NS Atomic Units: No 00:07:43.594 Maximum Single Source Range Length: 128 00:07:43.594 Maximum Copy Length: 128 00:07:43.594 Maximum Source Range Count: 128 00:07:43.594 NGUID/EUI64 Never Reused: No 00:07:43.594 Namespace Write Protected: No 00:07:43.594 Number of LBA Formats: 8 00:07:43.594 Current LBA Format: LBA Format #04 00:07:43.594 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:43.594 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.594 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.594 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.594 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.594 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.594 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.594 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.594 00:07:43.594 NVM Specific Namespace Data 00:07:43.594 =========================== 00:07:43.594 Logical Block Storage Tag Mask: 0 00:07:43.594 Protection Information Capabilities: 00:07:43.594 16b Guard Protection Information Storage Tag Support: No 00:07:43.594 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.594 Storage Tag Check Read Support: No 00:07:43.594 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Namespace ID:2 00:07:43.594 Error Recovery Timeout: Unlimited 00:07:43.594 Command Set Identifier: NVM (00h) 00:07:43.594 Deallocate: Supported 00:07:43.594 Deallocated/Unwritten Error: Supported 00:07:43.594 Deallocated Read Value: All 0x00 00:07:43.594 Deallocate in Write Zeroes: Not Supported 00:07:43.594 Deallocated Guard Field: 0xFFFF 00:07:43.594 Flush: Supported 00:07:43.594 Reservation: Not Supported 00:07:43.594 Namespace Sharing Capabilities: Private 00:07:43.594 Size (in LBAs): 1048576 (4GiB) 00:07:43.594 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.594 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.594 Thin Provisioning: Not Supported 00:07:43.594 Per-NS Atomic Units: No 00:07:43.594 Maximum Single Source Range Length: 128 00:07:43.594 Maximum Copy Length: 128 00:07:43.594 Maximum Source Range Count: 128 00:07:43.594 NGUID/EUI64 Never Reused: No 00:07:43.594 Namespace Write Protected: No 00:07:43.594 Number of LBA Formats: 8 00:07:43.594 Current LBA Format: LBA Format #04 00:07:43.594 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.594 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.594 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.594 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.594 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.594 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.594 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.594 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.594 00:07:43.594 NVM Specific Namespace Data 00:07:43.594 =========================== 00:07:43.594 Logical Block Storage Tag Mask: 0 00:07:43.594 Protection Information Capabilities: 00:07:43.594 16b Guard Protection Information Storage Tag Support: No 00:07:43.594 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:43.594 Storage Tag Check Read Support: No 00:07:43.594 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Namespace ID:3 00:07:43.594 Error Recovery Timeout: Unlimited 00:07:43.594 Command Set Identifier: NVM (00h) 00:07:43.594 Deallocate: Supported 00:07:43.594 Deallocated/Unwritten Error: Supported 00:07:43.594 Deallocated Read Value: All 0x00 00:07:43.594 Deallocate in Write Zeroes: Not Supported 00:07:43.594 Deallocated Guard Field: 0xFFFF 00:07:43.594 Flush: Supported 00:07:43.594 Reservation: Not Supported 00:07:43.594 Namespace Sharing Capabilities: Private 00:07:43.594 Size (in LBAs): 1048576 (4GiB) 00:07:43.594 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.594 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.594 Thin Provisioning: Not Supported 00:07:43.594 Per-NS Atomic Units: No 00:07:43.594 Maximum Single Source Range Length: 128 00:07:43.594 Maximum Copy Length: 128 00:07:43.594 Maximum Source Range Count: 128 00:07:43.594 NGUID/EUI64 Never Reused: No 00:07:43.594 Namespace Write Protected: No 00:07:43.594 Number of LBA Formats: 8 00:07:43.594 Current LBA Format: LBA Format #04 00:07:43.594 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.594 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.594 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.594 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.594 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.594 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.594 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.594 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.594 00:07:43.594 NVM Specific Namespace Data 00:07:43.594 =========================== 00:07:43.594 Logical Block Storage Tag Mask: 0 00:07:43.594 Protection Information Capabilities: 00:07:43.594 16b Guard Protection Information Storage Tag Support: No 00:07:43.594 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.594 Storage Tag Check Read Support: No 00:07:43.594 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.594 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.594 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:43.852 ===================================================== 00:07:43.852 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:43.852 ===================================================== 00:07:43.852 Controller Capabilities/Features 00:07:43.852 ================================ 00:07:43.852 Vendor ID: 1b36 00:07:43.852 Subsystem Vendor ID: 1af4 00:07:43.852 Serial Number: 12340 00:07:43.852 Model Number: QEMU NVMe Ctrl 00:07:43.852 Firmware Version: 8.0.0 00:07:43.852 Recommended Arb Burst: 6 00:07:43.852 IEEE OUI Identifier: 00 54 52 00:07:43.852 Multi-path I/O 00:07:43.852 May have multiple subsystem ports: No 00:07:43.852 May have multiple controllers: No 00:07:43.852 Associated with SR-IOV VF: No 00:07:43.852 Max Data Transfer Size: 524288 00:07:43.852 Max Number of Namespaces: 256 00:07:43.852 Max Number of I/O Queues: 64 00:07:43.852 NVMe Specification Version (VS): 1.4 00:07:43.852 NVMe Specification Version (Identify): 1.4 00:07:43.852 Maximum Queue Entries: 2048 00:07:43.852 Contiguous Queues Required: Yes 00:07:43.852 Arbitration Mechanisms Supported 00:07:43.852 Weighted Round Robin: Not Supported 00:07:43.852 Vendor Specific: Not Supported 00:07:43.852 Reset Timeout: 7500 ms 00:07:43.852 Doorbell Stride: 4 bytes 00:07:43.852 NVM Subsystem Reset: Not Supported 00:07:43.852 Command Sets Supported 00:07:43.852 NVM Command Set: Supported 00:07:43.852 Boot Partition: Not Supported 00:07:43.852 Memory Page Size Minimum: 4096 bytes 00:07:43.852 Memory Page Size Maximum: 65536 bytes 00:07:43.852 Persistent Memory Region: Not Supported 00:07:43.852 Optional Asynchronous Events Supported 00:07:43.852 Namespace Attribute Notices: Supported 00:07:43.852 Firmware Activation Notices: Not Supported 00:07:43.852 ANA Change Notices: Not Supported 00:07:43.852 PLE Aggregate Log Change Notices: Not Supported 00:07:43.852 LBA Status Info Alert Notices: Not Supported 00:07:43.852 EGE Aggregate Log Change Notices: Not Supported 00:07:43.852 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.852 Zone Descriptor Change Notices: Not Supported 00:07:43.852 Discovery Log Change Notices: Not Supported 00:07:43.852 Controller Attributes 00:07:43.852 128-bit Host Identifier: Not Supported 00:07:43.852 Non-Operational Permissive Mode: Not Supported 00:07:43.852 NVM Sets: Not Supported 00:07:43.852 Read Recovery Levels: Not Supported 00:07:43.852 Endurance Groups: Not Supported 00:07:43.852 Predictable Latency Mode: Not Supported 00:07:43.852 Traffic Based Keep ALive: Not Supported 00:07:43.852 Namespace Granularity: Not Supported 00:07:43.852 SQ Associations: Not Supported 00:07:43.852 UUID List: Not Supported 00:07:43.852 Multi-Domain Subsystem: Not Supported 00:07:43.852 Fixed Capacity Management: Not Supported 00:07:43.852 Variable Capacity Management: Not Supported 00:07:43.852 Delete Endurance Group: Not Supported 00:07:43.852 Delete NVM Set: Not Supported 00:07:43.852 Extended LBA Formats Supported: Supported 00:07:43.852 Flexible Data Placement Supported: Not Supported 00:07:43.852 00:07:43.852 Controller Memory Buffer Support 00:07:43.852 ================================ 00:07:43.852 Supported: No 00:07:43.852 00:07:43.852 Persistent Memory Region Support 00:07:43.852 
================================ 00:07:43.852 Supported: No 00:07:43.852 00:07:43.852 Admin Command Set Attributes 00:07:43.852 ============================ 00:07:43.852 Security Send/Receive: Not Supported 00:07:43.852 Format NVM: Supported 00:07:43.852 Firmware Activate/Download: Not Supported 00:07:43.852 Namespace Management: Supported 00:07:43.852 Device Self-Test: Not Supported 00:07:43.852 Directives: Supported 00:07:43.852 NVMe-MI: Not Supported 00:07:43.852 Virtualization Management: Not Supported 00:07:43.852 Doorbell Buffer Config: Supported 00:07:43.852 Get LBA Status Capability: Not Supported 00:07:43.852 Command & Feature Lockdown Capability: Not Supported 00:07:43.852 Abort Command Limit: 4 00:07:43.852 Async Event Request Limit: 4 00:07:43.852 Number of Firmware Slots: N/A 00:07:43.852 Firmware Slot 1 Read-Only: N/A 00:07:43.852 Firmware Activation Without Reset: N/A 00:07:43.852 Multiple Update Detection Support: N/A 00:07:43.852 Firmware Update Granularity: No Information Provided 00:07:43.853 Per-Namespace SMART Log: Yes 00:07:43.853 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.853 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:43.853 Command Effects Log Page: Supported 00:07:43.853 Get Log Page Extended Data: Supported 00:07:43.853 Telemetry Log Pages: Not Supported 00:07:43.853 Persistent Event Log Pages: Not Supported 00:07:43.853 Supported Log Pages Log Page: May Support 00:07:43.853 Commands Supported & Effects Log Page: Not Supported 00:07:43.853 Feature Identifiers & Effects Log Page:May Support 00:07:43.853 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.853 Data Area 4 for Telemetry Log: Not Supported 00:07:43.853 Error Log Page Entries Supported: 1 00:07:43.853 Keep Alive: Not Supported 00:07:43.853 00:07:43.853 NVM Command Set Attributes 00:07:43.853 ========================== 00:07:43.853 Submission Queue Entry Size 00:07:43.853 Max: 64 00:07:43.853 Min: 64 00:07:43.853 Completion Queue Entry Size 00:07:43.853 Max: 16 00:07:43.853 Min: 16 00:07:43.853 Number of Namespaces: 256 00:07:43.853 Compare Command: Supported 00:07:43.853 Write Uncorrectable Command: Not Supported 00:07:43.853 Dataset Management Command: Supported 00:07:43.853 Write Zeroes Command: Supported 00:07:43.853 Set Features Save Field: Supported 00:07:43.853 Reservations: Not Supported 00:07:43.853 Timestamp: Supported 00:07:43.853 Copy: Supported 00:07:43.853 Volatile Write Cache: Present 00:07:43.853 Atomic Write Unit (Normal): 1 00:07:43.853 Atomic Write Unit (PFail): 1 00:07:43.853 Atomic Compare & Write Unit: 1 00:07:43.853 Fused Compare & Write: Not Supported 00:07:43.853 Scatter-Gather List 00:07:43.853 SGL Command Set: Supported 00:07:43.853 SGL Keyed: Not Supported 00:07:43.853 SGL Bit Bucket Descriptor: Not Supported 00:07:43.853 SGL Metadata Pointer: Not Supported 00:07:43.853 Oversized SGL: Not Supported 00:07:43.853 SGL Metadata Address: Not Supported 00:07:43.853 SGL Offset: Not Supported 00:07:43.853 Transport SGL Data Block: Not Supported 00:07:43.853 Replay Protected Memory Block: Not Supported 00:07:43.853 00:07:43.853 Firmware Slot Information 00:07:43.853 ========================= 00:07:43.853 Active slot: 1 00:07:43.853 Slot 1 Firmware Revision: 1.0 00:07:43.853 00:07:43.853 00:07:43.853 Commands Supported and Effects 00:07:43.853 ============================== 00:07:43.853 Admin Commands 00:07:43.853 -------------- 00:07:43.853 Delete I/O Submission Queue (00h): Supported 00:07:43.853 Create I/O Submission Queue (01h): Supported 00:07:43.853 
Get Log Page (02h): Supported 00:07:43.853 Delete I/O Completion Queue (04h): Supported 00:07:43.853 Create I/O Completion Queue (05h): Supported 00:07:43.853 Identify (06h): Supported 00:07:43.853 Abort (08h): Supported 00:07:43.853 Set Features (09h): Supported 00:07:43.853 Get Features (0Ah): Supported 00:07:43.853 Asynchronous Event Request (0Ch): Supported 00:07:43.853 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.853 Directive Send (19h): Supported 00:07:43.853 Directive Receive (1Ah): Supported 00:07:43.853 Virtualization Management (1Ch): Supported 00:07:43.853 Doorbell Buffer Config (7Ch): Supported 00:07:43.853 Format NVM (80h): Supported LBA-Change 00:07:43.853 I/O Commands 00:07:43.853 ------------ 00:07:43.853 Flush (00h): Supported LBA-Change 00:07:43.853 Write (01h): Supported LBA-Change 00:07:43.853 Read (02h): Supported 00:07:43.853 Compare (05h): Supported 00:07:43.853 Write Zeroes (08h): Supported LBA-Change 00:07:43.853 Dataset Management (09h): Supported LBA-Change 00:07:43.853 Unknown (0Ch): Supported 00:07:43.853 Unknown (12h): Supported 00:07:43.853 Copy (19h): Supported LBA-Change 00:07:43.853 Unknown (1Dh): Supported LBA-Change 00:07:43.853 00:07:43.853 Error Log 00:07:43.853 ========= 00:07:43.853 00:07:43.853 Arbitration 00:07:43.853 =========== 00:07:43.853 Arbitration Burst: no limit 00:07:43.853 00:07:43.853 Power Management 00:07:43.853 ================ 00:07:43.853 Number of Power States: 1 00:07:43.853 Current Power State: Power State #0 00:07:43.853 Power State #0: 00:07:43.853 Max Power: 25.00 W 00:07:43.853 Non-Operational State: Operational 00:07:43.853 Entry Latency: 16 microseconds 00:07:43.853 Exit Latency: 4 microseconds 00:07:43.853 Relative Read Throughput: 0 00:07:43.853 Relative Read Latency: 0 00:07:43.853 Relative Write Throughput: 0 00:07:43.853 Relative Write Latency: 0 00:07:43.853 Idle Power: Not Reported 00:07:43.853 Active Power: Not Reported 00:07:43.853 Non-Operational Permissive Mode: Not Supported 00:07:43.853 00:07:43.853 Health Information 00:07:43.853 ================== 00:07:43.853 Critical Warnings: 00:07:43.853 Available Spare Space: OK 00:07:43.853 Temperature: OK 00:07:43.853 Device Reliability: OK 00:07:43.853 Read Only: No 00:07:43.853 Volatile Memory Backup: OK 00:07:43.853 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.853 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.853 Available Spare: 0% 00:07:43.853 Available Spare Threshold: 0% 00:07:43.853 Life Percentage Used: 0% 00:07:43.853 Data Units Read: 695 00:07:43.853 Data Units Written: 623 00:07:43.853 Host Read Commands: 39290 00:07:43.853 Host Write Commands: 39076 00:07:43.853 Controller Busy Time: 0 minutes 00:07:43.853 Power Cycles: 0 00:07:43.853 Power On Hours: 0 hours 00:07:43.853 Unsafe Shutdowns: 0 00:07:43.853 Unrecoverable Media Errors: 0 00:07:43.853 Lifetime Error Log Entries: 0 00:07:43.853 Warning Temperature Time: 0 minutes 00:07:43.853 Critical Temperature Time: 0 minutes 00:07:43.853 00:07:43.853 Number of Queues 00:07:43.853 ================ 00:07:43.853 Number of I/O Submission Queues: 64 00:07:43.853 Number of I/O Completion Queues: 64 00:07:43.853 00:07:43.853 ZNS Specific Controller Data 00:07:43.853 ============================ 00:07:43.853 Zone Append Size Limit: 0 00:07:43.853 00:07:43.853 00:07:43.853 Active Namespaces 00:07:43.853 ================= 00:07:43.853 Namespace ID:1 00:07:43.853 Error Recovery Timeout: Unlimited 00:07:43.853 Command Set Identifier: NVM (00h) 00:07:43.853 Deallocate: Supported 
00:07:43.853 Deallocated/Unwritten Error: Supported 00:07:43.853 Deallocated Read Value: All 0x00 00:07:43.853 Deallocate in Write Zeroes: Not Supported 00:07:43.853 Deallocated Guard Field: 0xFFFF 00:07:43.853 Flush: Supported 00:07:43.853 Reservation: Not Supported 00:07:43.853 Metadata Transferred as: Separate Metadata Buffer 00:07:43.853 Namespace Sharing Capabilities: Private 00:07:43.853 Size (in LBAs): 1548666 (5GiB) 00:07:43.853 Capacity (in LBAs): 1548666 (5GiB) 00:07:43.853 Utilization (in LBAs): 1548666 (5GiB) 00:07:43.853 Thin Provisioning: Not Supported 00:07:43.853 Per-NS Atomic Units: No 00:07:43.853 Maximum Single Source Range Length: 128 00:07:43.853 Maximum Copy Length: 128 00:07:43.853 Maximum Source Range Count: 128 00:07:43.853 NGUID/EUI64 Never Reused: No 00:07:43.853 Namespace Write Protected: No 00:07:43.853 Number of LBA Formats: 8 00:07:43.853 Current LBA Format: LBA Format #07 00:07:43.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.853 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.853 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.853 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.853 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.854 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.854 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.854 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.854 00:07:43.854 NVM Specific Namespace Data 00:07:43.854 =========================== 00:07:43.854 Logical Block Storage Tag Mask: 0 00:07:43.854 Protection Information Capabilities: 00:07:43.854 16b Guard Protection Information Storage Tag Support: No 00:07:43.854 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.854 Storage Tag Check Read Support: No 00:07:43.854 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.854 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.854 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:44.179 ===================================================== 00:07:44.179 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:44.180 ===================================================== 00:07:44.180 Controller Capabilities/Features 00:07:44.180 ================================ 00:07:44.180 Vendor ID: 1b36 00:07:44.180 Subsystem Vendor ID: 1af4 00:07:44.180 Serial Number: 12341 00:07:44.180 Model Number: QEMU NVMe Ctrl 00:07:44.180 Firmware Version: 8.0.0 00:07:44.180 Recommended Arb Burst: 6 00:07:44.180 IEEE OUI Identifier: 00 54 52 00:07:44.180 Multi-path I/O 00:07:44.180 May have multiple subsystem ports: No 00:07:44.180 May have multiple 
controllers: No 00:07:44.180 Associated with SR-IOV VF: No 00:07:44.180 Max Data Transfer Size: 524288 00:07:44.180 Max Number of Namespaces: 256 00:07:44.180 Max Number of I/O Queues: 64 00:07:44.180 NVMe Specification Version (VS): 1.4 00:07:44.180 NVMe Specification Version (Identify): 1.4 00:07:44.180 Maximum Queue Entries: 2048 00:07:44.180 Contiguous Queues Required: Yes 00:07:44.180 Arbitration Mechanisms Supported 00:07:44.180 Weighted Round Robin: Not Supported 00:07:44.180 Vendor Specific: Not Supported 00:07:44.180 Reset Timeout: 7500 ms 00:07:44.180 Doorbell Stride: 4 bytes 00:07:44.180 NVM Subsystem Reset: Not Supported 00:07:44.180 Command Sets Supported 00:07:44.180 NVM Command Set: Supported 00:07:44.180 Boot Partition: Not Supported 00:07:44.180 Memory Page Size Minimum: 4096 bytes 00:07:44.180 Memory Page Size Maximum: 65536 bytes 00:07:44.180 Persistent Memory Region: Not Supported 00:07:44.180 Optional Asynchronous Events Supported 00:07:44.180 Namespace Attribute Notices: Supported 00:07:44.180 Firmware Activation Notices: Not Supported 00:07:44.180 ANA Change Notices: Not Supported 00:07:44.180 PLE Aggregate Log Change Notices: Not Supported 00:07:44.180 LBA Status Info Alert Notices: Not Supported 00:07:44.180 EGE Aggregate Log Change Notices: Not Supported 00:07:44.180 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.180 Zone Descriptor Change Notices: Not Supported 00:07:44.180 Discovery Log Change Notices: Not Supported 00:07:44.180 Controller Attributes 00:07:44.180 128-bit Host Identifier: Not Supported 00:07:44.180 Non-Operational Permissive Mode: Not Supported 00:07:44.180 NVM Sets: Not Supported 00:07:44.180 Read Recovery Levels: Not Supported 00:07:44.180 Endurance Groups: Not Supported 00:07:44.180 Predictable Latency Mode: Not Supported 00:07:44.180 Traffic Based Keep ALive: Not Supported 00:07:44.180 Namespace Granularity: Not Supported 00:07:44.180 SQ Associations: Not Supported 00:07:44.180 UUID List: Not Supported 00:07:44.180 Multi-Domain Subsystem: Not Supported 00:07:44.180 Fixed Capacity Management: Not Supported 00:07:44.180 Variable Capacity Management: Not Supported 00:07:44.180 Delete Endurance Group: Not Supported 00:07:44.180 Delete NVM Set: Not Supported 00:07:44.180 Extended LBA Formats Supported: Supported 00:07:44.180 Flexible Data Placement Supported: Not Supported 00:07:44.180 00:07:44.180 Controller Memory Buffer Support 00:07:44.180 ================================ 00:07:44.180 Supported: No 00:07:44.180 00:07:44.180 Persistent Memory Region Support 00:07:44.180 ================================ 00:07:44.180 Supported: No 00:07:44.180 00:07:44.180 Admin Command Set Attributes 00:07:44.180 ============================ 00:07:44.180 Security Send/Receive: Not Supported 00:07:44.180 Format NVM: Supported 00:07:44.180 Firmware Activate/Download: Not Supported 00:07:44.180 Namespace Management: Supported 00:07:44.180 Device Self-Test: Not Supported 00:07:44.180 Directives: Supported 00:07:44.180 NVMe-MI: Not Supported 00:07:44.180 Virtualization Management: Not Supported 00:07:44.180 Doorbell Buffer Config: Supported 00:07:44.180 Get LBA Status Capability: Not Supported 00:07:44.180 Command & Feature Lockdown Capability: Not Supported 00:07:44.180 Abort Command Limit: 4 00:07:44.180 Async Event Request Limit: 4 00:07:44.180 Number of Firmware Slots: N/A 00:07:44.180 Firmware Slot 1 Read-Only: N/A 00:07:44.180 Firmware Activation Without Reset: N/A 00:07:44.180 Multiple Update Detection Support: N/A 00:07:44.180 Firmware Update 
Granularity: No Information Provided 00:07:44.180 Per-Namespace SMART Log: Yes 00:07:44.180 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.180 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:44.180 Command Effects Log Page: Supported 00:07:44.180 Get Log Page Extended Data: Supported 00:07:44.180 Telemetry Log Pages: Not Supported 00:07:44.180 Persistent Event Log Pages: Not Supported 00:07:44.180 Supported Log Pages Log Page: May Support 00:07:44.180 Commands Supported & Effects Log Page: Not Supported 00:07:44.180 Feature Identifiers & Effects Log Page:May Support 00:07:44.180 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.180 Data Area 4 for Telemetry Log: Not Supported 00:07:44.180 Error Log Page Entries Supported: 1 00:07:44.180 Keep Alive: Not Supported 00:07:44.180 00:07:44.180 NVM Command Set Attributes 00:07:44.180 ========================== 00:07:44.180 Submission Queue Entry Size 00:07:44.180 Max: 64 00:07:44.180 Min: 64 00:07:44.180 Completion Queue Entry Size 00:07:44.180 Max: 16 00:07:44.180 Min: 16 00:07:44.180 Number of Namespaces: 256 00:07:44.180 Compare Command: Supported 00:07:44.180 Write Uncorrectable Command: Not Supported 00:07:44.180 Dataset Management Command: Supported 00:07:44.180 Write Zeroes Command: Supported 00:07:44.180 Set Features Save Field: Supported 00:07:44.180 Reservations: Not Supported 00:07:44.180 Timestamp: Supported 00:07:44.180 Copy: Supported 00:07:44.180 Volatile Write Cache: Present 00:07:44.180 Atomic Write Unit (Normal): 1 00:07:44.180 Atomic Write Unit (PFail): 1 00:07:44.180 Atomic Compare & Write Unit: 1 00:07:44.180 Fused Compare & Write: Not Supported 00:07:44.180 Scatter-Gather List 00:07:44.180 SGL Command Set: Supported 00:07:44.180 SGL Keyed: Not Supported 00:07:44.180 SGL Bit Bucket Descriptor: Not Supported 00:07:44.180 SGL Metadata Pointer: Not Supported 00:07:44.180 Oversized SGL: Not Supported 00:07:44.180 SGL Metadata Address: Not Supported 00:07:44.180 SGL Offset: Not Supported 00:07:44.180 Transport SGL Data Block: Not Supported 00:07:44.180 Replay Protected Memory Block: Not Supported 00:07:44.180 00:07:44.180 Firmware Slot Information 00:07:44.180 ========================= 00:07:44.180 Active slot: 1 00:07:44.180 Slot 1 Firmware Revision: 1.0 00:07:44.180 00:07:44.180 00:07:44.180 Commands Supported and Effects 00:07:44.180 ============================== 00:07:44.180 Admin Commands 00:07:44.180 -------------- 00:07:44.180 Delete I/O Submission Queue (00h): Supported 00:07:44.180 Create I/O Submission Queue (01h): Supported 00:07:44.180 Get Log Page (02h): Supported 00:07:44.180 Delete I/O Completion Queue (04h): Supported 00:07:44.180 Create I/O Completion Queue (05h): Supported 00:07:44.180 Identify (06h): Supported 00:07:44.180 Abort (08h): Supported 00:07:44.180 Set Features (09h): Supported 00:07:44.180 Get Features (0Ah): Supported 00:07:44.180 Asynchronous Event Request (0Ch): Supported 00:07:44.180 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.180 Directive Send (19h): Supported 00:07:44.180 Directive Receive (1Ah): Supported 00:07:44.180 Virtualization Management (1Ch): Supported 00:07:44.180 Doorbell Buffer Config (7Ch): Supported 00:07:44.180 Format NVM (80h): Supported LBA-Change 00:07:44.180 I/O Commands 00:07:44.180 ------------ 00:07:44.180 Flush (00h): Supported LBA-Change 00:07:44.180 Write (01h): Supported LBA-Change 00:07:44.180 Read (02h): Supported 00:07:44.180 Compare (05h): Supported 00:07:44.180 Write Zeroes (08h): Supported LBA-Change 00:07:44.180 
Dataset Management (09h): Supported LBA-Change 00:07:44.180 Unknown (0Ch): Supported 00:07:44.180 Unknown (12h): Supported 00:07:44.180 Copy (19h): Supported LBA-Change 00:07:44.180 Unknown (1Dh): Supported LBA-Change 00:07:44.180 00:07:44.180 Error Log 00:07:44.180 ========= 00:07:44.181 00:07:44.181 Arbitration 00:07:44.181 =========== 00:07:44.181 Arbitration Burst: no limit 00:07:44.181 00:07:44.181 Power Management 00:07:44.181 ================ 00:07:44.181 Number of Power States: 1 00:07:44.181 Current Power State: Power State #0 00:07:44.181 Power State #0: 00:07:44.181 Max Power: 25.00 W 00:07:44.181 Non-Operational State: Operational 00:07:44.181 Entry Latency: 16 microseconds 00:07:44.181 Exit Latency: 4 microseconds 00:07:44.181 Relative Read Throughput: 0 00:07:44.181 Relative Read Latency: 0 00:07:44.181 Relative Write Throughput: 0 00:07:44.181 Relative Write Latency: 0 00:07:44.181 Idle Power: Not Reported 00:07:44.181 Active Power: Not Reported 00:07:44.181 Non-Operational Permissive Mode: Not Supported 00:07:44.181 00:07:44.181 Health Information 00:07:44.181 ================== 00:07:44.181 Critical Warnings: 00:07:44.181 Available Spare Space: OK 00:07:44.181 Temperature: OK 00:07:44.181 Device Reliability: OK 00:07:44.181 Read Only: No 00:07:44.181 Volatile Memory Backup: OK 00:07:44.181 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.181 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.181 Available Spare: 0% 00:07:44.181 Available Spare Threshold: 0% 00:07:44.181 Life Percentage Used: 0% 00:07:44.181 Data Units Read: 1109 00:07:44.181 Data Units Written: 982 00:07:44.181 Host Read Commands: 59139 00:07:44.181 Host Write Commands: 58025 00:07:44.181 Controller Busy Time: 0 minutes 00:07:44.181 Power Cycles: 0 00:07:44.181 Power On Hours: 0 hours 00:07:44.181 Unsafe Shutdowns: 0 00:07:44.181 Unrecoverable Media Errors: 0 00:07:44.181 Lifetime Error Log Entries: 0 00:07:44.181 Warning Temperature Time: 0 minutes 00:07:44.181 Critical Temperature Time: 0 minutes 00:07:44.181 00:07:44.181 Number of Queues 00:07:44.181 ================ 00:07:44.181 Number of I/O Submission Queues: 64 00:07:44.181 Number of I/O Completion Queues: 64 00:07:44.181 00:07:44.181 ZNS Specific Controller Data 00:07:44.181 ============================ 00:07:44.181 Zone Append Size Limit: 0 00:07:44.181 00:07:44.181 00:07:44.181 Active Namespaces 00:07:44.181 ================= 00:07:44.181 Namespace ID:1 00:07:44.181 Error Recovery Timeout: Unlimited 00:07:44.181 Command Set Identifier: NVM (00h) 00:07:44.181 Deallocate: Supported 00:07:44.181 Deallocated/Unwritten Error: Supported 00:07:44.181 Deallocated Read Value: All 0x00 00:07:44.181 Deallocate in Write Zeroes: Not Supported 00:07:44.181 Deallocated Guard Field: 0xFFFF 00:07:44.181 Flush: Supported 00:07:44.181 Reservation: Not Supported 00:07:44.181 Namespace Sharing Capabilities: Private 00:07:44.181 Size (in LBAs): 1310720 (5GiB) 00:07:44.181 Capacity (in LBAs): 1310720 (5GiB) 00:07:44.181 Utilization (in LBAs): 1310720 (5GiB) 00:07:44.181 Thin Provisioning: Not Supported 00:07:44.181 Per-NS Atomic Units: No 00:07:44.181 Maximum Single Source Range Length: 128 00:07:44.181 Maximum Copy Length: 128 00:07:44.181 Maximum Source Range Count: 128 00:07:44.181 NGUID/EUI64 Never Reused: No 00:07:44.181 Namespace Write Protected: No 00:07:44.181 Number of LBA Formats: 8 00:07:44.181 Current LBA Format: LBA Format #04 00:07:44.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.181 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:44.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.181 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.181 00:07:44.181 NVM Specific Namespace Data 00:07:44.181 =========================== 00:07:44.181 Logical Block Storage Tag Mask: 0 00:07:44.181 Protection Information Capabilities: 00:07:44.181 16b Guard Protection Information Storage Tag Support: No 00:07:44.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.181 Storage Tag Check Read Support: No 00:07:44.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.181 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:44.181 06:31:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:44.440 ===================================================== 00:07:44.440 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:44.440 ===================================================== 00:07:44.440 Controller Capabilities/Features 00:07:44.440 ================================ 00:07:44.440 Vendor ID: 1b36 00:07:44.440 Subsystem Vendor ID: 1af4 00:07:44.440 Serial Number: 12342 00:07:44.440 Model Number: QEMU NVMe Ctrl 00:07:44.441 Firmware Version: 8.0.0 00:07:44.441 Recommended Arb Burst: 6 00:07:44.441 IEEE OUI Identifier: 00 54 52 00:07:44.441 Multi-path I/O 00:07:44.441 May have multiple subsystem ports: No 00:07:44.441 May have multiple controllers: No 00:07:44.441 Associated with SR-IOV VF: No 00:07:44.441 Max Data Transfer Size: 524288 00:07:44.441 Max Number of Namespaces: 256 00:07:44.441 Max Number of I/O Queues: 64 00:07:44.441 NVMe Specification Version (VS): 1.4 00:07:44.441 NVMe Specification Version (Identify): 1.4 00:07:44.441 Maximum Queue Entries: 2048 00:07:44.441 Contiguous Queues Required: Yes 00:07:44.441 Arbitration Mechanisms Supported 00:07:44.441 Weighted Round Robin: Not Supported 00:07:44.441 Vendor Specific: Not Supported 00:07:44.441 Reset Timeout: 7500 ms 00:07:44.441 Doorbell Stride: 4 bytes 00:07:44.441 NVM Subsystem Reset: Not Supported 00:07:44.441 Command Sets Supported 00:07:44.441 NVM Command Set: Supported 00:07:44.441 Boot Partition: Not Supported 00:07:44.441 Memory Page Size Minimum: 4096 bytes 00:07:44.441 Memory Page Size Maximum: 65536 bytes 00:07:44.441 Persistent Memory Region: Not Supported 00:07:44.441 Optional Asynchronous Events Supported 00:07:44.441 Namespace Attribute Notices: Supported 00:07:44.441 Firmware 
Activation Notices: Not Supported 00:07:44.441 ANA Change Notices: Not Supported 00:07:44.441 PLE Aggregate Log Change Notices: Not Supported 00:07:44.441 LBA Status Info Alert Notices: Not Supported 00:07:44.441 EGE Aggregate Log Change Notices: Not Supported 00:07:44.441 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.441 Zone Descriptor Change Notices: Not Supported 00:07:44.441 Discovery Log Change Notices: Not Supported 00:07:44.441 Controller Attributes 00:07:44.441 128-bit Host Identifier: Not Supported 00:07:44.441 Non-Operational Permissive Mode: Not Supported 00:07:44.441 NVM Sets: Not Supported 00:07:44.441 Read Recovery Levels: Not Supported 00:07:44.441 Endurance Groups: Not Supported 00:07:44.441 Predictable Latency Mode: Not Supported 00:07:44.441 Traffic Based Keep ALive: Not Supported 00:07:44.441 Namespace Granularity: Not Supported 00:07:44.441 SQ Associations: Not Supported 00:07:44.441 UUID List: Not Supported 00:07:44.441 Multi-Domain Subsystem: Not Supported 00:07:44.441 Fixed Capacity Management: Not Supported 00:07:44.441 Variable Capacity Management: Not Supported 00:07:44.441 Delete Endurance Group: Not Supported 00:07:44.441 Delete NVM Set: Not Supported 00:07:44.441 Extended LBA Formats Supported: Supported 00:07:44.441 Flexible Data Placement Supported: Not Supported 00:07:44.441 00:07:44.441 Controller Memory Buffer Support 00:07:44.441 ================================ 00:07:44.441 Supported: No 00:07:44.441 00:07:44.441 Persistent Memory Region Support 00:07:44.441 ================================ 00:07:44.441 Supported: No 00:07:44.441 00:07:44.441 Admin Command Set Attributes 00:07:44.441 ============================ 00:07:44.441 Security Send/Receive: Not Supported 00:07:44.441 Format NVM: Supported 00:07:44.441 Firmware Activate/Download: Not Supported 00:07:44.441 Namespace Management: Supported 00:07:44.441 Device Self-Test: Not Supported 00:07:44.441 Directives: Supported 00:07:44.441 NVMe-MI: Not Supported 00:07:44.441 Virtualization Management: Not Supported 00:07:44.441 Doorbell Buffer Config: Supported 00:07:44.441 Get LBA Status Capability: Not Supported 00:07:44.441 Command & Feature Lockdown Capability: Not Supported 00:07:44.441 Abort Command Limit: 4 00:07:44.441 Async Event Request Limit: 4 00:07:44.441 Number of Firmware Slots: N/A 00:07:44.441 Firmware Slot 1 Read-Only: N/A 00:07:44.441 Firmware Activation Without Reset: N/A 00:07:44.441 Multiple Update Detection Support: N/A 00:07:44.441 Firmware Update Granularity: No Information Provided 00:07:44.441 Per-Namespace SMART Log: Yes 00:07:44.441 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.441 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:44.441 Command Effects Log Page: Supported 00:07:44.441 Get Log Page Extended Data: Supported 00:07:44.441 Telemetry Log Pages: Not Supported 00:07:44.441 Persistent Event Log Pages: Not Supported 00:07:44.441 Supported Log Pages Log Page: May Support 00:07:44.441 Commands Supported & Effects Log Page: Not Supported 00:07:44.441 Feature Identifiers & Effects Log Page:May Support 00:07:44.441 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.441 Data Area 4 for Telemetry Log: Not Supported 00:07:44.441 Error Log Page Entries Supported: 1 00:07:44.441 Keep Alive: Not Supported 00:07:44.441 00:07:44.441 NVM Command Set Attributes 00:07:44.441 ========================== 00:07:44.441 Submission Queue Entry Size 00:07:44.441 Max: 64 00:07:44.441 Min: 64 00:07:44.441 Completion Queue Entry Size 00:07:44.441 Max: 16 
00:07:44.441 Min: 16 00:07:44.441 Number of Namespaces: 256 00:07:44.441 Compare Command: Supported 00:07:44.441 Write Uncorrectable Command: Not Supported 00:07:44.441 Dataset Management Command: Supported 00:07:44.441 Write Zeroes Command: Supported 00:07:44.441 Set Features Save Field: Supported 00:07:44.441 Reservations: Not Supported 00:07:44.441 Timestamp: Supported 00:07:44.441 Copy: Supported 00:07:44.441 Volatile Write Cache: Present 00:07:44.441 Atomic Write Unit (Normal): 1 00:07:44.441 Atomic Write Unit (PFail): 1 00:07:44.441 Atomic Compare & Write Unit: 1 00:07:44.441 Fused Compare & Write: Not Supported 00:07:44.441 Scatter-Gather List 00:07:44.441 SGL Command Set: Supported 00:07:44.441 SGL Keyed: Not Supported 00:07:44.441 SGL Bit Bucket Descriptor: Not Supported 00:07:44.441 SGL Metadata Pointer: Not Supported 00:07:44.441 Oversized SGL: Not Supported 00:07:44.441 SGL Metadata Address: Not Supported 00:07:44.441 SGL Offset: Not Supported 00:07:44.441 Transport SGL Data Block: Not Supported 00:07:44.441 Replay Protected Memory Block: Not Supported 00:07:44.441 00:07:44.441 Firmware Slot Information 00:07:44.441 ========================= 00:07:44.441 Active slot: 1 00:07:44.441 Slot 1 Firmware Revision: 1.0 00:07:44.441 00:07:44.441 00:07:44.441 Commands Supported and Effects 00:07:44.441 ============================== 00:07:44.441 Admin Commands 00:07:44.441 -------------- 00:07:44.441 Delete I/O Submission Queue (00h): Supported 00:07:44.441 Create I/O Submission Queue (01h): Supported 00:07:44.441 Get Log Page (02h): Supported 00:07:44.441 Delete I/O Completion Queue (04h): Supported 00:07:44.441 Create I/O Completion Queue (05h): Supported 00:07:44.441 Identify (06h): Supported 00:07:44.441 Abort (08h): Supported 00:07:44.441 Set Features (09h): Supported 00:07:44.441 Get Features (0Ah): Supported 00:07:44.441 Asynchronous Event Request (0Ch): Supported 00:07:44.441 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.441 Directive Send (19h): Supported 00:07:44.441 Directive Receive (1Ah): Supported 00:07:44.441 Virtualization Management (1Ch): Supported 00:07:44.441 Doorbell Buffer Config (7Ch): Supported 00:07:44.441 Format NVM (80h): Supported LBA-Change 00:07:44.441 I/O Commands 00:07:44.441 ------------ 00:07:44.441 Flush (00h): Supported LBA-Change 00:07:44.441 Write (01h): Supported LBA-Change 00:07:44.441 Read (02h): Supported 00:07:44.441 Compare (05h): Supported 00:07:44.441 Write Zeroes (08h): Supported LBA-Change 00:07:44.441 Dataset Management (09h): Supported LBA-Change 00:07:44.441 Unknown (0Ch): Supported 00:07:44.441 Unknown (12h): Supported 00:07:44.441 Copy (19h): Supported LBA-Change 00:07:44.441 Unknown (1Dh): Supported LBA-Change 00:07:44.441 00:07:44.441 Error Log 00:07:44.441 ========= 00:07:44.441 00:07:44.441 Arbitration 00:07:44.441 =========== 00:07:44.441 Arbitration Burst: no limit 00:07:44.441 00:07:44.441 Power Management 00:07:44.441 ================ 00:07:44.441 Number of Power States: 1 00:07:44.441 Current Power State: Power State #0 00:07:44.441 Power State #0: 00:07:44.441 Max Power: 25.00 W 00:07:44.441 Non-Operational State: Operational 00:07:44.441 Entry Latency: 16 microseconds 00:07:44.441 Exit Latency: 4 microseconds 00:07:44.441 Relative Read Throughput: 0 00:07:44.441 Relative Read Latency: 0 00:07:44.441 Relative Write Throughput: 0 00:07:44.441 Relative Write Latency: 0 00:07:44.441 Idle Power: Not Reported 00:07:44.442 Active Power: Not Reported 00:07:44.442 Non-Operational Permissive Mode: Not Supported 
00:07:44.442 00:07:44.442 Health Information 00:07:44.442 ================== 00:07:44.442 Critical Warnings: 00:07:44.442 Available Spare Space: OK 00:07:44.442 Temperature: OK 00:07:44.442 Device Reliability: OK 00:07:44.442 Read Only: No 00:07:44.442 Volatile Memory Backup: OK 00:07:44.442 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.442 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.442 Available Spare: 0% 00:07:44.442 Available Spare Threshold: 0% 00:07:44.442 Life Percentage Used: 0% 00:07:44.442 Data Units Read: 2307 00:07:44.442 Data Units Written: 2094 00:07:44.442 Host Read Commands: 120504 00:07:44.442 Host Write Commands: 118774 00:07:44.442 Controller Busy Time: 0 minutes 00:07:44.442 Power Cycles: 0 00:07:44.442 Power On Hours: 0 hours 00:07:44.442 Unsafe Shutdowns: 0 00:07:44.442 Unrecoverable Media Errors: 0 00:07:44.442 Lifetime Error Log Entries: 0 00:07:44.442 Warning Temperature Time: 0 minutes 00:07:44.442 Critical Temperature Time: 0 minutes 00:07:44.442 00:07:44.442 Number of Queues 00:07:44.442 ================ 00:07:44.442 Number of I/O Submission Queues: 64 00:07:44.442 Number of I/O Completion Queues: 64 00:07:44.442 00:07:44.442 ZNS Specific Controller Data 00:07:44.442 ============================ 00:07:44.442 Zone Append Size Limit: 0 00:07:44.442 00:07:44.442 00:07:44.442 Active Namespaces 00:07:44.442 ================= 00:07:44.442 Namespace ID:1 00:07:44.442 Error Recovery Timeout: Unlimited 00:07:44.442 Command Set Identifier: NVM (00h) 00:07:44.442 Deallocate: Supported 00:07:44.442 Deallocated/Unwritten Error: Supported 00:07:44.442 Deallocated Read Value: All 0x00 00:07:44.442 Deallocate in Write Zeroes: Not Supported 00:07:44.442 Deallocated Guard Field: 0xFFFF 00:07:44.442 Flush: Supported 00:07:44.442 Reservation: Not Supported 00:07:44.442 Namespace Sharing Capabilities: Private 00:07:44.442 Size (in LBAs): 1048576 (4GiB) 00:07:44.442 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.442 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.442 Thin Provisioning: Not Supported 00:07:44.442 Per-NS Atomic Units: No 00:07:44.442 Maximum Single Source Range Length: 128 00:07:44.442 Maximum Copy Length: 128 00:07:44.442 Maximum Source Range Count: 128 00:07:44.442 NGUID/EUI64 Never Reused: No 00:07:44.442 Namespace Write Protected: No 00:07:44.442 Number of LBA Formats: 8 00:07:44.442 Current LBA Format: LBA Format #04 00:07:44.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.442 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.442 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.442 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.442 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.442 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.442 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.442 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.442 00:07:44.442 NVM Specific Namespace Data 00:07:44.442 =========================== 00:07:44.442 Logical Block Storage Tag Mask: 0 00:07:44.442 Protection Information Capabilities: 00:07:44.442 16b Guard Protection Information Storage Tag Support: No 00:07:44.442 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.442 Storage Tag Check Read Support: No 00:07:44.442 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Namespace ID:2 00:07:44.442 Error Recovery Timeout: Unlimited 00:07:44.442 Command Set Identifier: NVM (00h) 00:07:44.442 Deallocate: Supported 00:07:44.442 Deallocated/Unwritten Error: Supported 00:07:44.442 Deallocated Read Value: All 0x00 00:07:44.442 Deallocate in Write Zeroes: Not Supported 00:07:44.442 Deallocated Guard Field: 0xFFFF 00:07:44.442 Flush: Supported 00:07:44.442 Reservation: Not Supported 00:07:44.442 Namespace Sharing Capabilities: Private 00:07:44.442 Size (in LBAs): 1048576 (4GiB) 00:07:44.442 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.442 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.442 Thin Provisioning: Not Supported 00:07:44.442 Per-NS Atomic Units: No 00:07:44.442 Maximum Single Source Range Length: 128 00:07:44.442 Maximum Copy Length: 128 00:07:44.442 Maximum Source Range Count: 128 00:07:44.442 NGUID/EUI64 Never Reused: No 00:07:44.442 Namespace Write Protected: No 00:07:44.442 Number of LBA Formats: 8 00:07:44.442 Current LBA Format: LBA Format #04 00:07:44.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.442 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.442 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.442 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.442 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.442 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.442 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.442 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.442 00:07:44.442 NVM Specific Namespace Data 00:07:44.442 =========================== 00:07:44.442 Logical Block Storage Tag Mask: 0 00:07:44.442 Protection Information Capabilities: 00:07:44.442 16b Guard Protection Information Storage Tag Support: No 00:07:44.442 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.442 Storage Tag Check Read Support: No 00:07:44.442 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Namespace ID:3 00:07:44.442 Error Recovery Timeout: Unlimited 00:07:44.442 Command Set Identifier: NVM (00h) 00:07:44.442 Deallocate: Supported 00:07:44.442 Deallocated/Unwritten Error: Supported 00:07:44.442 Deallocated Read 
Value: All 0x00 00:07:44.442 Deallocate in Write Zeroes: Not Supported 00:07:44.442 Deallocated Guard Field: 0xFFFF 00:07:44.442 Flush: Supported 00:07:44.442 Reservation: Not Supported 00:07:44.442 Namespace Sharing Capabilities: Private 00:07:44.442 Size (in LBAs): 1048576 (4GiB) 00:07:44.442 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.442 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.442 Thin Provisioning: Not Supported 00:07:44.442 Per-NS Atomic Units: No 00:07:44.442 Maximum Single Source Range Length: 128 00:07:44.442 Maximum Copy Length: 128 00:07:44.442 Maximum Source Range Count: 128 00:07:44.442 NGUID/EUI64 Never Reused: No 00:07:44.442 Namespace Write Protected: No 00:07:44.442 Number of LBA Formats: 8 00:07:44.442 Current LBA Format: LBA Format #04 00:07:44.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.442 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.442 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.442 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.442 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.442 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.442 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.442 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.442 00:07:44.442 NVM Specific Namespace Data 00:07:44.442 =========================== 00:07:44.442 Logical Block Storage Tag Mask: 0 00:07:44.442 Protection Information Capabilities: 00:07:44.442 16b Guard Protection Information Storage Tag Support: No 00:07:44.442 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.442 Storage Tag Check Read Support: No 00:07:44.442 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.442 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 06:31:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:44.443 06:31:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:44.443 ===================================================== 00:07:44.443 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:44.443 ===================================================== 00:07:44.443 Controller Capabilities/Features 00:07:44.443 ================================ 00:07:44.443 Vendor ID: 1b36 00:07:44.443 Subsystem Vendor ID: 1af4 00:07:44.443 Serial Number: 12343 00:07:44.443 Model Number: QEMU NVMe Ctrl 00:07:44.443 Firmware Version: 8.0.0 00:07:44.443 Recommended Arb Burst: 6 00:07:44.443 IEEE OUI Identifier: 00 54 52 00:07:44.443 Multi-path I/O 00:07:44.443 May have multiple subsystem ports: No 00:07:44.443 May have multiple controllers: Yes 00:07:44.443 Associated with SR-IOV VF: No 00:07:44.443 Max Data Transfer Size: 524288 00:07:44.443 Max Number of Namespaces: 
256 00:07:44.443 Max Number of I/O Queues: 64 00:07:44.443 NVMe Specification Version (VS): 1.4 00:07:44.443 NVMe Specification Version (Identify): 1.4 00:07:44.443 Maximum Queue Entries: 2048 00:07:44.443 Contiguous Queues Required: Yes 00:07:44.443 Arbitration Mechanisms Supported 00:07:44.443 Weighted Round Robin: Not Supported 00:07:44.443 Vendor Specific: Not Supported 00:07:44.443 Reset Timeout: 7500 ms 00:07:44.443 Doorbell Stride: 4 bytes 00:07:44.443 NVM Subsystem Reset: Not Supported 00:07:44.443 Command Sets Supported 00:07:44.443 NVM Command Set: Supported 00:07:44.443 Boot Partition: Not Supported 00:07:44.443 Memory Page Size Minimum: 4096 bytes 00:07:44.443 Memory Page Size Maximum: 65536 bytes 00:07:44.443 Persistent Memory Region: Not Supported 00:07:44.443 Optional Asynchronous Events Supported 00:07:44.443 Namespace Attribute Notices: Supported 00:07:44.443 Firmware Activation Notices: Not Supported 00:07:44.443 ANA Change Notices: Not Supported 00:07:44.443 PLE Aggregate Log Change Notices: Not Supported 00:07:44.443 LBA Status Info Alert Notices: Not Supported 00:07:44.443 EGE Aggregate Log Change Notices: Not Supported 00:07:44.443 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.443 Zone Descriptor Change Notices: Not Supported 00:07:44.443 Discovery Log Change Notices: Not Supported 00:07:44.443 Controller Attributes 00:07:44.443 128-bit Host Identifier: Not Supported 00:07:44.443 Non-Operational Permissive Mode: Not Supported 00:07:44.443 NVM Sets: Not Supported 00:07:44.443 Read Recovery Levels: Not Supported 00:07:44.443 Endurance Groups: Supported 00:07:44.443 Predictable Latency Mode: Not Supported 00:07:44.443 Traffic Based Keep Alive: Not Supported 00:07:44.443 Namespace Granularity: Not Supported 00:07:44.443 SQ Associations: Not Supported 00:07:44.443 UUID List: Not Supported 00:07:44.443 Multi-Domain Subsystem: Not Supported 00:07:44.443 Fixed Capacity Management: Not Supported 00:07:44.443 Variable Capacity Management: Not Supported 00:07:44.443 Delete Endurance Group: Not Supported 00:07:44.443 Delete NVM Set: Not Supported 00:07:44.443 Extended LBA Formats Supported: Supported 00:07:44.443 Flexible Data Placement Supported: Supported 00:07:44.443 00:07:44.443 Controller Memory Buffer Support 00:07:44.443 ================================ 00:07:44.443 Supported: No 00:07:44.443 00:07:44.443 Persistent Memory Region Support 00:07:44.443 ================================ 00:07:44.443 Supported: No 00:07:44.443 00:07:44.443 Admin Command Set Attributes 00:07:44.443 ============================ 00:07:44.443 Security Send/Receive: Not Supported 00:07:44.443 Format NVM: Supported 00:07:44.443 Firmware Activate/Download: Not Supported 00:07:44.443 Namespace Management: Supported 00:07:44.443 Device Self-Test: Not Supported 00:07:44.443 Directives: Supported 00:07:44.443 NVMe-MI: Not Supported 00:07:44.443 Virtualization Management: Not Supported 00:07:44.443 Doorbell Buffer Config: Supported 00:07:44.443 Get LBA Status Capability: Not Supported 00:07:44.443 Command & Feature Lockdown Capability: Not Supported 00:07:44.443 Abort Command Limit: 4 00:07:44.443 Async Event Request Limit: 4 00:07:44.443 Number of Firmware Slots: N/A 00:07:44.443 Firmware Slot 1 Read-Only: N/A 00:07:44.443 Firmware Activation Without Reset: N/A 00:07:44.443 Multiple Update Detection Support: N/A 00:07:44.443 Firmware Update Granularity: No Information Provided 00:07:44.443 Per-Namespace SMART Log: Yes 00:07:44.443 Asymmetric Namespace Access Log Page: Not Supported 
00:07:44.443 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:44.443 Command Effects Log Page: Supported 00:07:44.443 Get Log Page Extended Data: Supported 00:07:44.443 Telemetry Log Pages: Not Supported 00:07:44.443 Persistent Event Log Pages: Not Supported 00:07:44.443 Supported Log Pages Log Page: May Support 00:07:44.443 Commands Supported & Effects Log Page: Not Supported 00:07:44.443 Feature Identifiers & Effects Log Page: May Support 00:07:44.443 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.443 Data Area 4 for Telemetry Log: Not Supported 00:07:44.443 Error Log Page Entries Supported: 1 00:07:44.443 Keep Alive: Not Supported 00:07:44.443 00:07:44.443 NVM Command Set Attributes 00:07:44.443 ========================== 00:07:44.443 Submission Queue Entry Size 00:07:44.443 Max: 64 00:07:44.443 Min: 64 00:07:44.443 Completion Queue Entry Size 00:07:44.443 Max: 16 00:07:44.443 Min: 16 00:07:44.443 Number of Namespaces: 256 00:07:44.443 Compare Command: Supported 00:07:44.443 Write Uncorrectable Command: Not Supported 00:07:44.443 Dataset Management Command: Supported 00:07:44.443 Write Zeroes Command: Supported 00:07:44.443 Set Features Save Field: Supported 00:07:44.443 Reservations: Not Supported 00:07:44.443 Timestamp: Supported 00:07:44.443 Copy: Supported 00:07:44.443 Volatile Write Cache: Present 00:07:44.443 Atomic Write Unit (Normal): 1 00:07:44.443 Atomic Write Unit (PFail): 1 00:07:44.443 Atomic Compare & Write Unit: 1 00:07:44.443 Fused Compare & Write: Not Supported 00:07:44.443 Scatter-Gather List 00:07:44.443 SGL Command Set: Supported 00:07:44.443 SGL Keyed: Not Supported 00:07:44.443 SGL Bit Bucket Descriptor: Not Supported 00:07:44.443 SGL Metadata Pointer: Not Supported 00:07:44.443 Oversized SGL: Not Supported 00:07:44.443 SGL Metadata Address: Not Supported 00:07:44.443 SGL Offset: Not Supported 00:07:44.443 Transport SGL Data Block: Not Supported 00:07:44.443 Replay Protected Memory Block: Not Supported 00:07:44.443 00:07:44.443 Firmware Slot Information 00:07:44.443 ========================= 00:07:44.443 Active slot: 1 00:07:44.443 Slot 1 Firmware Revision: 1.0 00:07:44.443 00:07:44.443 00:07:44.443 Commands Supported and Effects 00:07:44.443 ============================== 00:07:44.443 Admin Commands 00:07:44.443 -------------- 00:07:44.443 Delete I/O Submission Queue (00h): Supported 00:07:44.443 Create I/O Submission Queue (01h): Supported 00:07:44.443 Get Log Page (02h): Supported 00:07:44.443 Delete I/O Completion Queue (04h): Supported 00:07:44.443 Create I/O Completion Queue (05h): Supported 00:07:44.443 Identify (06h): Supported 00:07:44.443 Abort (08h): Supported 00:07:44.443 Set Features (09h): Supported 00:07:44.443 Get Features (0Ah): Supported 00:07:44.443 Asynchronous Event Request (0Ch): Supported 00:07:44.443 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.443 Directive Send (19h): Supported 00:07:44.443 Directive Receive (1Ah): Supported 00:07:44.443 Virtualization Management (1Ch): Supported 00:07:44.443 Doorbell Buffer Config (7Ch): Supported 00:07:44.443 Format NVM (80h): Supported LBA-Change 00:07:44.443 I/O Commands 00:07:44.443 ------------ 00:07:44.443 Flush (00h): Supported LBA-Change 00:07:44.443 Write (01h): Supported LBA-Change 00:07:44.443 Read (02h): Supported 00:07:44.443 Compare (05h): Supported 00:07:44.443 Write Zeroes (08h): Supported LBA-Change 00:07:44.443 Dataset Management (09h): Supported LBA-Change 00:07:44.443 Unknown (0Ch): Supported 00:07:44.443 Unknown (12h): Supported 00:07:44.443 Copy 
(19h): Supported LBA-Change 00:07:44.443 Unknown (1Dh): Supported LBA-Change 00:07:44.443 00:07:44.443 Error Log 00:07:44.443 ========= 00:07:44.443 00:07:44.443 Arbitration 00:07:44.443 =========== 00:07:44.444 Arbitration Burst: no limit 00:07:44.444 00:07:44.444 Power Management 00:07:44.444 ================ 00:07:44.444 Number of Power States: 1 00:07:44.444 Current Power State: Power State #0 00:07:44.444 Power State #0: 00:07:44.444 Max Power: 25.00 W 00:07:44.444 Non-Operational State: Operational 00:07:44.444 Entry Latency: 16 microseconds 00:07:44.444 Exit Latency: 4 microseconds 00:07:44.444 Relative Read Throughput: 0 00:07:44.444 Relative Read Latency: 0 00:07:44.444 Relative Write Throughput: 0 00:07:44.444 Relative Write Latency: 0 00:07:44.444 Idle Power: Not Reported 00:07:44.444 Active Power: Not Reported 00:07:44.444 Non-Operational Permissive Mode: Not Supported 00:07:44.444 00:07:44.444 Health Information 00:07:44.444 ================== 00:07:44.444 Critical Warnings: 00:07:44.444 Available Spare Space: OK 00:07:44.444 Temperature: OK 00:07:44.444 Device Reliability: OK 00:07:44.444 Read Only: No 00:07:44.444 Volatile Memory Backup: OK 00:07:44.444 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.444 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.444 Available Spare: 0% 00:07:44.444 Available Spare Threshold: 0% 00:07:44.444 Life Percentage Used: 0% 00:07:44.444 Data Units Read: 863 00:07:44.444 Data Units Written: 792 00:07:44.444 Host Read Commands: 40962 00:07:44.444 Host Write Commands: 40385 00:07:44.444 Controller Busy Time: 0 minutes 00:07:44.444 Power Cycles: 0 00:07:44.444 Power On Hours: 0 hours 00:07:44.444 Unsafe Shutdowns: 0 00:07:44.444 Unrecoverable Media Errors: 0 00:07:44.444 Lifetime Error Log Entries: 0 00:07:44.444 Warning Temperature Time: 0 minutes 00:07:44.444 Critical Temperature Time: 0 minutes 00:07:44.444 00:07:44.444 Number of Queues 00:07:44.444 ================ 00:07:44.444 Number of I/O Submission Queues: 64 00:07:44.444 Number of I/O Completion Queues: 64 00:07:44.444 00:07:44.444 ZNS Specific Controller Data 00:07:44.444 ============================ 00:07:44.444 Zone Append Size Limit: 0 00:07:44.444 00:07:44.444 00:07:44.444 Active Namespaces 00:07:44.444 ================= 00:07:44.444 Namespace ID:1 00:07:44.444 Error Recovery Timeout: Unlimited 00:07:44.444 Command Set Identifier: NVM (00h) 00:07:44.444 Deallocate: Supported 00:07:44.444 Deallocated/Unwritten Error: Supported 00:07:44.444 Deallocated Read Value: All 0x00 00:07:44.444 Deallocate in Write Zeroes: Not Supported 00:07:44.444 Deallocated Guard Field: 0xFFFF 00:07:44.444 Flush: Supported 00:07:44.444 Reservation: Not Supported 00:07:44.444 Namespace Sharing Capabilities: Multiple Controllers 00:07:44.444 Size (in LBAs): 262144 (1GiB) 00:07:44.444 Capacity (in LBAs): 262144 (1GiB) 00:07:44.444 Utilization (in LBAs): 262144 (1GiB) 00:07:44.444 Thin Provisioning: Not Supported 00:07:44.444 Per-NS Atomic Units: No 00:07:44.444 Maximum Single Source Range Length: 128 00:07:44.444 Maximum Copy Length: 128 00:07:44.444 Maximum Source Range Count: 128 00:07:44.444 NGUID/EUI64 Never Reused: No 00:07:44.444 Namespace Write Protected: No 00:07:44.444 Endurance group ID: 1 00:07:44.444 Number of LBA Formats: 8 00:07:44.444 Current LBA Format: LBA Format #04 00:07:44.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.444 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:44.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.444 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.444 00:07:44.444 Get Feature FDP: 00:07:44.444 ================ 00:07:44.444 Enabled: Yes 00:07:44.444 FDP configuration index: 0 00:07:44.444 00:07:44.444 FDP configurations log page 00:07:44.444 =========================== 00:07:44.444 Number of FDP configurations: 1 00:07:44.444 Version: 0 00:07:44.444 Size: 112 00:07:44.444 FDP Configuration Descriptor: 0 00:07:44.444 Descriptor Size: 96 00:07:44.444 Reclaim Group Identifier format: 2 00:07:44.444 FDP Volatile Write Cache: Not Present 00:07:44.444 FDP Configuration: Valid 00:07:44.444 Vendor Specific Size: 0 00:07:44.444 Number of Reclaim Groups: 2 00:07:44.444 Number of Reclaim Unit Handles: 8 00:07:44.444 Max Placement Identifiers: 128 00:07:44.444 Number of Namespaces Supported: 256 00:07:44.444 Reclaim Unit Nominal Size: 6000000 bytes 00:07:44.444 Estimated Reclaim Unit Time Limit: Not Reported 00:07:44.444 RUH Desc #000: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #001: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #002: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #003: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #004: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #005: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #006: RUH Type: Initially Isolated 00:07:44.444 RUH Desc #007: RUH Type: Initially Isolated 00:07:44.444 00:07:44.444 FDP reclaim unit handle usage log page 00:07:44.702 ====================================== 00:07:44.702 Number of Reclaim Unit Handles: 8 00:07:44.702 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:44.702 RUH Usage Desc #001: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #002: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #003: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #004: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #005: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #006: RUH Attributes: Unused 00:07:44.702 RUH Usage Desc #007: RUH Attributes: Unused 00:07:44.702 00:07:44.702 FDP statistics log page 00:07:44.702 ======================= 00:07:44.702 Host bytes with metadata written: 509255680 00:07:44.702 Media bytes with metadata written: 509313024 00:07:44.702 Media bytes erased: 0 00:07:44.702 00:07:44.702 FDP events log page 00:07:44.702 =================== 00:07:44.702 Number of FDP events: 0 00:07:44.702 00:07:44.702 NVM Specific Namespace Data 00:07:44.702 =========================== 00:07:44.702 Logical Block Storage Tag Mask: 0 00:07:44.702 Protection Information Capabilities: 00:07:44.702 16b Guard Protection Information Storage Tag Support: No 00:07:44.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.702 Storage Tag Check Read Support: No 00:07:44.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.702 00:07:44.702 real 0m1.294s 00:07:44.702 user 0m0.482s 00:07:44.702 sys 0m0.586s 00:07:44.702 06:31:36 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.702 ************************************ 00:07:44.702 END TEST nvme_identify 00:07:44.702 ************************************ 00:07:44.702 06:31:36 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:44.702 06:31:36 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:44.702 06:31:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.702 06:31:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.702 06:31:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.702 ************************************ 00:07:44.702 START TEST nvme_perf 00:07:44.702 ************************************ 00:07:44.702 06:31:36 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:44.702 06:31:36 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:46.077 Initializing NVMe Controllers 00:07:46.077 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.077 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.077 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.077 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.077 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:46.077 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:46.077 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:46.077 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:46.077 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:46.077 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:46.077 Initialization complete. Launching workers. 
00:07:46.077 ======================================================== 00:07:46.077 Latency(us) 00:07:46.077 Device Information : IOPS MiB/s Average min max 00:07:46.077 PCIE (0000:00:10.0) NSID 1 from core 0: 18469.26 216.44 6939.47 5919.68 36096.87 00:07:46.077 PCIE (0000:00:11.0) NSID 1 from core 0: 18469.26 216.44 6928.86 5987.40 34169.07 00:07:46.077 PCIE (0000:00:13.0) NSID 1 from core 0: 18469.26 216.44 6917.09 5963.42 32583.67 00:07:46.077 PCIE (0000:00:12.0) NSID 1 from core 0: 18469.26 216.44 6904.78 5982.15 30525.12 00:07:46.077 PCIE (0000:00:12.0) NSID 2 from core 0: 18469.26 216.44 6891.88 5992.68 28523.78 00:07:46.077 PCIE (0000:00:12.0) NSID 3 from core 0: 18533.16 217.19 6855.53 5988.47 22907.88 00:07:46.077 ======================================================== 00:07:46.077 Total : 110879.45 1299.37 6906.24 5919.68 36096.87 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 5999.065us 00:07:46.077 10.00000% : 6125.095us 00:07:46.077 25.00000% : 6326.745us 00:07:46.077 50.00000% : 6654.425us 00:07:46.077 75.00000% : 6956.898us 00:07:46.077 90.00000% : 7158.548us 00:07:46.077 95.00000% : 8318.031us 00:07:46.077 98.00000% : 10485.760us 00:07:46.077 99.00000% : 12804.726us 00:07:46.077 99.50000% : 30650.683us 00:07:46.077 99.90000% : 35691.914us 00:07:46.077 99.99000% : 36095.212us 00:07:46.077 99.99900% : 36296.862us 00:07:46.077 99.99990% : 36296.862us 00:07:46.077 99.99999% : 36296.862us 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 6074.683us 00:07:46.077 10.00000% : 6200.714us 00:07:46.077 25.00000% : 6351.951us 00:07:46.077 50.00000% : 6654.425us 00:07:46.077 75.00000% : 6906.486us 00:07:46.077 90.00000% : 7108.135us 00:07:46.077 95.00000% : 8318.031us 00:07:46.077 98.00000% : 10889.058us 00:07:46.077 99.00000% : 12754.314us 00:07:46.077 99.50000% : 28835.840us 00:07:46.077 99.90000% : 33877.071us 00:07:46.077 99.99000% : 34280.369us 00:07:46.077 99.99900% : 34280.369us 00:07:46.077 99.99990% : 34280.369us 00:07:46.077 99.99999% : 34280.369us 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 6074.683us 00:07:46.077 10.00000% : 6200.714us 00:07:46.077 25.00000% : 6351.951us 00:07:46.077 50.00000% : 6604.012us 00:07:46.077 75.00000% : 6906.486us 00:07:46.077 90.00000% : 7108.135us 00:07:46.077 95.00000% : 8570.092us 00:07:46.077 98.00000% : 11141.120us 00:07:46.077 99.00000% : 12855.138us 00:07:46.077 99.50000% : 27222.646us 00:07:46.077 99.90000% : 32263.877us 00:07:46.077 99.99000% : 32667.175us 00:07:46.077 99.99900% : 32667.175us 00:07:46.077 99.99990% : 32667.175us 00:07:46.077 99.99999% : 32667.175us 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 6074.683us 00:07:46.077 10.00000% : 6200.714us 00:07:46.077 25.00000% : 6351.951us 00:07:46.077 50.00000% : 6604.012us 00:07:46.077 75.00000% : 6856.074us 00:07:46.077 90.00000% : 7057.723us 00:07:46.077 95.00000% : 8620.505us 00:07:46.077 98.00000% : 11040.295us 00:07:46.077 99.00000% : 
12703.902us 00:07:46.077 99.50000% : 25105.329us 00:07:46.077 99.90000% : 30247.385us 00:07:46.077 99.99000% : 30650.683us 00:07:46.077 99.99900% : 30650.683us 00:07:46.077 99.99990% : 30650.683us 00:07:46.077 99.99999% : 30650.683us 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 6099.889us 00:07:46.077 10.00000% : 6200.714us 00:07:46.077 25.00000% : 6351.951us 00:07:46.077 50.00000% : 6604.012us 00:07:46.077 75.00000% : 6906.486us 00:07:46.077 90.00000% : 7057.723us 00:07:46.077 95.00000% : 8620.505us 00:07:46.077 98.00000% : 10636.997us 00:07:46.077 99.00000% : 13006.375us 00:07:46.077 99.50000% : 22988.012us 00:07:46.077 99.90000% : 28230.892us 00:07:46.077 99.99000% : 28634.191us 00:07:46.077 99.99900% : 28634.191us 00:07:46.077 99.99990% : 28634.191us 00:07:46.077 99.99999% : 28634.191us 00:07:46.077 00:07:46.077 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.077 ================================================================================= 00:07:46.077 1.00000% : 6099.889us 00:07:46.077 10.00000% : 6200.714us 00:07:46.077 25.00000% : 6351.951us 00:07:46.077 50.00000% : 6654.425us 00:07:46.077 75.00000% : 6906.486us 00:07:46.077 90.00000% : 7108.135us 00:07:46.077 95.00000% : 8469.268us 00:07:46.077 98.00000% : 10485.760us 00:07:46.077 99.00000% : 12905.551us 00:07:46.077 99.50000% : 17442.658us 00:07:46.077 99.90000% : 22483.889us 00:07:46.077 99.99000% : 22887.188us 00:07:46.077 99.99900% : 22988.012us 00:07:46.077 99.99990% : 22988.012us 00:07:46.077 99.99999% : 22988.012us 00:07:46.077 00:07:46.077 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.077 ============================================================================== 00:07:46.077 Range in us Cumulative IO count 00:07:46.077 5898.240 - 5923.446: 0.0162% ( 3) 00:07:46.077 5923.446 - 5948.652: 0.1027% ( 16) 00:07:46.077 5948.652 - 5973.858: 0.4704% ( 68) 00:07:46.077 5973.858 - 5999.065: 1.1732% ( 130) 00:07:46.077 5999.065 - 6024.271: 2.5573% ( 256) 00:07:46.077 6024.271 - 6049.477: 4.3901% ( 339) 00:07:46.077 6049.477 - 6074.683: 6.4230% ( 376) 00:07:46.077 6074.683 - 6099.889: 8.3369% ( 354) 00:07:46.077 6099.889 - 6125.095: 10.2455% ( 353) 00:07:46.077 6125.095 - 6150.302: 12.2729% ( 375) 00:07:46.077 6150.302 - 6175.508: 14.2788% ( 371) 00:07:46.077 6175.508 - 6200.714: 16.2197% ( 359) 00:07:46.077 6200.714 - 6225.920: 18.3175% ( 388) 00:07:46.077 6225.920 - 6251.126: 20.2044% ( 349) 00:07:46.077 6251.126 - 6276.332: 22.1399% ( 358) 00:07:46.077 6276.332 - 6301.538: 24.0863% ( 360) 00:07:46.077 6301.538 - 6326.745: 26.0921% ( 371) 00:07:46.077 6326.745 - 6351.951: 28.1358% ( 378) 00:07:46.077 6351.951 - 6377.157: 30.2984% ( 400) 00:07:46.077 6377.157 - 6402.363: 32.3097% ( 372) 00:07:46.077 6402.363 - 6427.569: 34.4399% ( 394) 00:07:46.077 6427.569 - 6452.775: 36.4782% ( 377) 00:07:46.077 6452.775 - 6503.188: 40.7331% ( 787) 00:07:46.077 6503.188 - 6553.600: 44.9557% ( 781) 00:07:46.077 6553.600 - 6604.012: 49.1458% ( 775) 00:07:46.077 6604.012 - 6654.425: 53.3413% ( 776) 00:07:46.077 6654.425 - 6704.837: 57.4827% ( 766) 00:07:46.077 6704.837 - 6755.249: 61.6620% ( 773) 00:07:46.077 6755.249 - 6805.662: 65.9278% ( 789) 00:07:46.077 6805.662 - 6856.074: 70.0368% ( 760) 00:07:46.077 6856.074 - 6906.486: 74.3404% ( 796) 00:07:46.077 6906.486 - 6956.898: 78.4440% ( 759) 00:07:46.077 6956.898 - 7007.311: 
82.7638% ( 799) 00:07:46.077 7007.311 - 7057.723: 86.4998% ( 691) 00:07:46.077 7057.723 - 7108.135: 88.9706% ( 457) 00:07:46.077 7108.135 - 7158.548: 90.3060% ( 247) 00:07:46.078 7158.548 - 7208.960: 90.9872% ( 126) 00:07:46.078 7208.960 - 7259.372: 91.4252% ( 81) 00:07:46.078 7259.372 - 7309.785: 91.7442% ( 59) 00:07:46.078 7309.785 - 7360.197: 92.1118% ( 68) 00:07:46.078 7360.197 - 7410.609: 92.3875% ( 51) 00:07:46.078 7410.609 - 7461.022: 92.6471% ( 48) 00:07:46.078 7461.022 - 7511.434: 92.8579% ( 39) 00:07:46.078 7511.434 - 7561.846: 92.9985% ( 26) 00:07:46.078 7561.846 - 7612.258: 93.1553% ( 29) 00:07:46.078 7612.258 - 7662.671: 93.2904% ( 25) 00:07:46.078 7662.671 - 7713.083: 93.4310% ( 26) 00:07:46.078 7713.083 - 7763.495: 93.5716% ( 26) 00:07:46.078 7763.495 - 7813.908: 93.7122% ( 26) 00:07:46.078 7813.908 - 7864.320: 93.8798% ( 31) 00:07:46.078 7864.320 - 7914.732: 94.0311% ( 28) 00:07:46.078 7914.732 - 7965.145: 94.1879% ( 29) 00:07:46.078 7965.145 - 8015.557: 94.3177% ( 24) 00:07:46.078 8015.557 - 8065.969: 94.4907% ( 32) 00:07:46.078 8065.969 - 8116.382: 94.6475% ( 29) 00:07:46.078 8116.382 - 8166.794: 94.7881% ( 26) 00:07:46.078 8166.794 - 8217.206: 94.9070% ( 22) 00:07:46.078 8217.206 - 8267.618: 94.9989% ( 17) 00:07:46.078 8267.618 - 8318.031: 95.0800% ( 15) 00:07:46.078 8318.031 - 8368.443: 95.1503% ( 13) 00:07:46.078 8368.443 - 8418.855: 95.2422% ( 17) 00:07:46.078 8418.855 - 8469.268: 95.3179% ( 14) 00:07:46.078 8469.268 - 8519.680: 95.3882% ( 13) 00:07:46.078 8519.680 - 8570.092: 95.4477% ( 11) 00:07:46.078 8570.092 - 8620.505: 95.5396% ( 17) 00:07:46.078 8620.505 - 8670.917: 95.6045% ( 12) 00:07:46.078 8670.917 - 8721.329: 95.6747% ( 13) 00:07:46.078 8721.329 - 8771.742: 95.7342% ( 11) 00:07:46.078 8771.742 - 8822.154: 95.7937% ( 11) 00:07:46.078 8822.154 - 8872.566: 95.8640% ( 13) 00:07:46.078 8872.566 - 8922.978: 95.9343% ( 13) 00:07:46.078 8922.978 - 8973.391: 95.9829% ( 9) 00:07:46.078 8973.391 - 9023.803: 95.9991% ( 3) 00:07:46.078 9023.803 - 9074.215: 96.0424% ( 8) 00:07:46.078 9074.215 - 9124.628: 96.1019% ( 11) 00:07:46.078 9124.628 - 9175.040: 96.1776% ( 14) 00:07:46.078 9175.040 - 9225.452: 96.2478% ( 13) 00:07:46.078 9225.452 - 9275.865: 96.3235% ( 14) 00:07:46.078 9275.865 - 9326.277: 96.4046% ( 15) 00:07:46.078 9326.277 - 9376.689: 96.4857% ( 15) 00:07:46.078 9376.689 - 9427.102: 96.5722% ( 16) 00:07:46.078 9427.102 - 9477.514: 96.6479% ( 14) 00:07:46.078 9477.514 - 9527.926: 96.7344% ( 16) 00:07:46.078 9527.926 - 9578.338: 96.8101% ( 14) 00:07:46.078 9578.338 - 9628.751: 96.8858% ( 14) 00:07:46.078 9628.751 - 9679.163: 96.9669% ( 15) 00:07:46.078 9679.163 - 9729.575: 97.0480% ( 15) 00:07:46.078 9729.575 - 9779.988: 97.1345% ( 16) 00:07:46.078 9779.988 - 9830.400: 97.1994% ( 12) 00:07:46.078 9830.400 - 9880.812: 97.2805% ( 15) 00:07:46.078 9880.812 - 9931.225: 97.3616% ( 15) 00:07:46.078 9931.225 - 9981.637: 97.4373% ( 14) 00:07:46.078 9981.637 - 10032.049: 97.5184% ( 15) 00:07:46.078 10032.049 - 10082.462: 97.5941% ( 14) 00:07:46.078 10082.462 - 10132.874: 97.6806% ( 16) 00:07:46.078 10132.874 - 10183.286: 97.7455% ( 12) 00:07:46.078 10183.286 - 10233.698: 97.7887% ( 8) 00:07:46.078 10233.698 - 10284.111: 97.8482% ( 11) 00:07:46.078 10284.111 - 10334.523: 97.8968% ( 9) 00:07:46.078 10334.523 - 10384.935: 97.9401% ( 8) 00:07:46.078 10384.935 - 10435.348: 97.9779% ( 7) 00:07:46.078 10435.348 - 10485.760: 98.0050% ( 5) 00:07:46.078 10485.760 - 10536.172: 98.0212% ( 3) 00:07:46.078 10536.172 - 10586.585: 98.0374% ( 3) 00:07:46.078 10586.585 - 10636.997: 
98.0536% ( 3) 00:07:46.078 10636.997 - 10687.409: 98.0699% ( 3) 00:07:46.078 10687.409 - 10737.822: 98.0861% ( 3) 00:07:46.078 10737.822 - 10788.234: 98.0915% ( 1) 00:07:46.078 10788.234 - 10838.646: 98.1077% ( 3) 00:07:46.078 10838.646 - 10889.058: 98.1293% ( 4) 00:07:46.078 10889.058 - 10939.471: 98.1401% ( 2) 00:07:46.078 10939.471 - 10989.883: 98.1564% ( 3) 00:07:46.078 10989.883 - 11040.295: 98.1726% ( 3) 00:07:46.078 11040.295 - 11090.708: 98.1942% ( 4) 00:07:46.078 11090.708 - 11141.120: 98.2050% ( 2) 00:07:46.078 11141.120 - 11191.532: 98.2212% ( 3) 00:07:46.078 11191.532 - 11241.945: 98.2321% ( 2) 00:07:46.078 11241.945 - 11292.357: 98.2591% ( 5) 00:07:46.078 11292.357 - 11342.769: 98.2969% ( 7) 00:07:46.078 11342.769 - 11393.182: 98.3131% ( 3) 00:07:46.078 11393.182 - 11443.594: 98.3240% ( 2) 00:07:46.078 11443.594 - 11494.006: 98.3402% ( 3) 00:07:46.078 11494.006 - 11544.418: 98.3564% ( 3) 00:07:46.078 11544.418 - 11594.831: 98.3672% ( 2) 00:07:46.078 11594.831 - 11645.243: 98.3834% ( 3) 00:07:46.078 11645.243 - 11695.655: 98.3942% ( 2) 00:07:46.078 11695.655 - 11746.068: 98.4105% ( 3) 00:07:46.078 11746.068 - 11796.480: 98.4267% ( 3) 00:07:46.078 11796.480 - 11846.892: 98.4375% ( 2) 00:07:46.078 11846.892 - 11897.305: 98.4537% ( 3) 00:07:46.078 11897.305 - 11947.717: 98.4699% ( 3) 00:07:46.078 11947.717 - 11998.129: 98.4970% ( 5) 00:07:46.078 11998.129 - 12048.542: 98.5294% ( 6) 00:07:46.078 12048.542 - 12098.954: 98.5619% ( 6) 00:07:46.078 12098.954 - 12149.366: 98.5835% ( 4) 00:07:46.078 12149.366 - 12199.778: 98.6213% ( 7) 00:07:46.078 12199.778 - 12250.191: 98.6484% ( 5) 00:07:46.078 12250.191 - 12300.603: 98.6808% ( 6) 00:07:46.078 12300.603 - 12351.015: 98.7240% ( 8) 00:07:46.078 12351.015 - 12401.428: 98.7619% ( 7) 00:07:46.078 12401.428 - 12451.840: 98.8106% ( 9) 00:07:46.078 12451.840 - 12502.252: 98.8430% ( 6) 00:07:46.078 12502.252 - 12552.665: 98.8700% ( 5) 00:07:46.078 12552.665 - 12603.077: 98.9025% ( 6) 00:07:46.078 12603.077 - 12653.489: 98.9241% ( 4) 00:07:46.078 12653.489 - 12703.902: 98.9619% ( 7) 00:07:46.078 12703.902 - 12754.314: 98.9890% ( 5) 00:07:46.078 12754.314 - 12804.726: 99.0160% ( 5) 00:07:46.078 12804.726 - 12855.138: 99.0484% ( 6) 00:07:46.078 12855.138 - 12905.551: 99.0755% ( 5) 00:07:46.078 12905.551 - 13006.375: 99.1512% ( 14) 00:07:46.078 13006.375 - 13107.200: 99.1890% ( 7) 00:07:46.078 13107.200 - 13208.025: 99.2160% ( 5) 00:07:46.078 13208.025 - 13308.849: 99.2431% ( 5) 00:07:46.078 13308.849 - 13409.674: 99.2755% ( 6) 00:07:46.078 13409.674 - 13510.498: 99.3026% ( 5) 00:07:46.078 13510.498 - 13611.323: 99.3080% ( 1) 00:07:46.078 29440.788 - 29642.437: 99.3242% ( 3) 00:07:46.078 29642.437 - 29844.086: 99.3620% ( 7) 00:07:46.078 29844.086 - 30045.735: 99.4053% ( 8) 00:07:46.078 30045.735 - 30247.385: 99.4431% ( 7) 00:07:46.078 30247.385 - 30449.034: 99.4810% ( 7) 00:07:46.078 30449.034 - 30650.683: 99.5242% ( 8) 00:07:46.078 30650.683 - 30852.332: 99.5621% ( 7) 00:07:46.078 30852.332 - 31053.982: 99.5999% ( 7) 00:07:46.078 31053.982 - 31255.631: 99.6432% ( 8) 00:07:46.078 31255.631 - 31457.280: 99.6540% ( 2) 00:07:46.078 34280.369 - 34482.018: 99.6918% ( 7) 00:07:46.078 34482.018 - 34683.668: 99.7297% ( 7) 00:07:46.078 34683.668 - 34885.317: 99.7675% ( 7) 00:07:46.078 34885.317 - 35086.966: 99.8054% ( 7) 00:07:46.078 35086.966 - 35288.615: 99.8432% ( 7) 00:07:46.078 35288.615 - 35490.265: 99.8811% ( 7) 00:07:46.078 35490.265 - 35691.914: 99.9189% ( 7) 00:07:46.078 35691.914 - 35893.563: 99.9567% ( 7) 00:07:46.078 35893.563 - 36095.212: 
99.9946% ( 7) 00:07:46.078 36095.212 - 36296.862: 100.0000% ( 1) 00:07:46.078 00:07:46.078 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.078 ============================================================================== 00:07:46.078 Range in us Cumulative IO count 00:07:46.078 5973.858 - 5999.065: 0.0108% ( 2) 00:07:46.078 5999.065 - 6024.271: 0.1135% ( 19) 00:07:46.078 6024.271 - 6049.477: 0.4379% ( 60) 00:07:46.078 6049.477 - 6074.683: 1.1408% ( 130) 00:07:46.078 6074.683 - 6099.889: 2.5573% ( 262) 00:07:46.078 6099.889 - 6125.095: 4.4712% ( 354) 00:07:46.078 6125.095 - 6150.302: 6.7096% ( 414) 00:07:46.078 6150.302 - 6175.508: 8.8992% ( 405) 00:07:46.078 6175.508 - 6200.714: 11.1267% ( 412) 00:07:46.078 6200.714 - 6225.920: 13.3867% ( 418) 00:07:46.078 6225.920 - 6251.126: 15.7385% ( 435) 00:07:46.078 6251.126 - 6276.332: 18.1066% ( 438) 00:07:46.078 6276.332 - 6301.538: 20.5450% ( 451) 00:07:46.078 6301.538 - 6326.745: 22.9077% ( 437) 00:07:46.078 6326.745 - 6351.951: 25.2325% ( 430) 00:07:46.078 6351.951 - 6377.157: 27.6654% ( 450) 00:07:46.078 6377.157 - 6402.363: 30.0660% ( 444) 00:07:46.078 6402.363 - 6427.569: 32.5043% ( 451) 00:07:46.078 6427.569 - 6452.775: 34.9643% ( 455) 00:07:46.078 6452.775 - 6503.188: 39.8735% ( 908) 00:07:46.078 6503.188 - 6553.600: 44.9070% ( 931) 00:07:46.078 6553.600 - 6604.012: 49.7513% ( 896) 00:07:46.078 6604.012 - 6654.425: 54.5902% ( 895) 00:07:46.078 6654.425 - 6704.837: 59.5210% ( 912) 00:07:46.078 6704.837 - 6755.249: 64.4031% ( 903) 00:07:46.078 6755.249 - 6805.662: 69.2853% ( 903) 00:07:46.078 6805.662 - 6856.074: 74.2269% ( 914) 00:07:46.078 6856.074 - 6906.486: 79.1414% ( 909) 00:07:46.078 6906.486 - 6956.898: 83.8073% ( 863) 00:07:46.078 6956.898 - 7007.311: 87.4405% ( 672) 00:07:46.078 7007.311 - 7057.723: 89.3761% ( 358) 00:07:46.078 7057.723 - 7108.135: 90.3709% ( 184) 00:07:46.078 7108.135 - 7158.548: 90.8953% ( 97) 00:07:46.078 7158.548 - 7208.960: 91.2305% ( 62) 00:07:46.078 7208.960 - 7259.372: 91.5820% ( 65) 00:07:46.078 7259.372 - 7309.785: 91.9118% ( 61) 00:07:46.078 7309.785 - 7360.197: 92.1875% ( 51) 00:07:46.078 7360.197 - 7410.609: 92.3875% ( 37) 00:07:46.079 7410.609 - 7461.022: 92.5227% ( 25) 00:07:46.079 7461.022 - 7511.434: 92.6417% ( 22) 00:07:46.079 7511.434 - 7561.846: 92.7768% ( 25) 00:07:46.079 7561.846 - 7612.258: 92.9823% ( 38) 00:07:46.079 7612.258 - 7662.671: 93.1661% ( 34) 00:07:46.079 7662.671 - 7713.083: 93.3824% ( 40) 00:07:46.079 7713.083 - 7763.495: 93.5554% ( 32) 00:07:46.079 7763.495 - 7813.908: 93.7338% ( 33) 00:07:46.079 7813.908 - 7864.320: 93.8906% ( 29) 00:07:46.079 7864.320 - 7914.732: 94.0582% ( 31) 00:07:46.079 7914.732 - 7965.145: 94.2204% ( 30) 00:07:46.079 7965.145 - 8015.557: 94.3934% ( 32) 00:07:46.079 8015.557 - 8065.969: 94.5123% ( 22) 00:07:46.079 8065.969 - 8116.382: 94.6421% ( 24) 00:07:46.079 8116.382 - 8166.794: 94.7448% ( 19) 00:07:46.079 8166.794 - 8217.206: 94.8529% ( 20) 00:07:46.079 8217.206 - 8267.618: 94.9557% ( 19) 00:07:46.079 8267.618 - 8318.031: 95.0584% ( 19) 00:07:46.079 8318.031 - 8368.443: 95.1503% ( 17) 00:07:46.079 8368.443 - 8418.855: 95.2260% ( 14) 00:07:46.079 8418.855 - 8469.268: 95.3287% ( 19) 00:07:46.079 8469.268 - 8519.680: 95.3882% ( 11) 00:07:46.079 8519.680 - 8570.092: 95.4693% ( 15) 00:07:46.079 8570.092 - 8620.505: 95.5017% ( 6) 00:07:46.079 8620.505 - 8670.917: 95.5396% ( 7) 00:07:46.079 8670.917 - 8721.329: 95.5882% ( 9) 00:07:46.079 8721.329 - 8771.742: 95.6369% ( 9) 00:07:46.079 8771.742 - 8822.154: 95.7072% ( 13) 00:07:46.079 
8822.154 - 8872.566: 95.7612% ( 10) 00:07:46.079 8872.566 - 8922.978: 95.8045% ( 8) 00:07:46.079 8922.978 - 8973.391: 95.8640% ( 11) 00:07:46.079 8973.391 - 9023.803: 95.9343% ( 13) 00:07:46.079 9023.803 - 9074.215: 95.9991% ( 12) 00:07:46.079 9074.215 - 9124.628: 96.0856% ( 16) 00:07:46.079 9124.628 - 9175.040: 96.1721% ( 16) 00:07:46.079 9175.040 - 9225.452: 96.2695% ( 18) 00:07:46.079 9225.452 - 9275.865: 96.3614% ( 17) 00:07:46.079 9275.865 - 9326.277: 96.4425% ( 15) 00:07:46.079 9326.277 - 9376.689: 96.5290% ( 16) 00:07:46.079 9376.689 - 9427.102: 96.6155% ( 16) 00:07:46.079 9427.102 - 9477.514: 96.6912% ( 14) 00:07:46.079 9477.514 - 9527.926: 96.7885% ( 18) 00:07:46.079 9527.926 - 9578.338: 96.8804% ( 17) 00:07:46.079 9578.338 - 9628.751: 96.9561% ( 14) 00:07:46.079 9628.751 - 9679.163: 97.0210% ( 12) 00:07:46.079 9679.163 - 9729.575: 97.0967% ( 14) 00:07:46.079 9729.575 - 9779.988: 97.1778% ( 15) 00:07:46.079 9779.988 - 9830.400: 97.2805% ( 19) 00:07:46.079 9830.400 - 9880.812: 97.3616% ( 15) 00:07:46.079 9880.812 - 9931.225: 97.4535% ( 17) 00:07:46.079 9931.225 - 9981.637: 97.5022% ( 9) 00:07:46.079 9981.637 - 10032.049: 97.5616% ( 11) 00:07:46.079 10032.049 - 10082.462: 97.6103% ( 9) 00:07:46.079 10082.462 - 10132.874: 97.6590% ( 9) 00:07:46.079 10132.874 - 10183.286: 97.6914% ( 6) 00:07:46.079 10183.286 - 10233.698: 97.7346% ( 8) 00:07:46.079 10233.698 - 10284.111: 97.7671% ( 6) 00:07:46.079 10284.111 - 10334.523: 97.8049% ( 7) 00:07:46.079 10334.523 - 10384.935: 97.8266% ( 4) 00:07:46.079 10384.935 - 10435.348: 97.8428% ( 3) 00:07:46.079 10435.348 - 10485.760: 97.8644% ( 4) 00:07:46.079 10485.760 - 10536.172: 97.8806% ( 3) 00:07:46.079 10536.172 - 10586.585: 97.9022% ( 4) 00:07:46.079 10586.585 - 10636.997: 97.9185% ( 3) 00:07:46.079 10636.997 - 10687.409: 97.9401% ( 4) 00:07:46.079 10687.409 - 10737.822: 97.9563% ( 3) 00:07:46.079 10737.822 - 10788.234: 97.9779% ( 4) 00:07:46.079 10788.234 - 10838.646: 97.9942% ( 3) 00:07:46.079 10838.646 - 10889.058: 98.0158% ( 4) 00:07:46.079 10889.058 - 10939.471: 98.0374% ( 4) 00:07:46.079 10939.471 - 10989.883: 98.0536% ( 3) 00:07:46.079 10989.883 - 11040.295: 98.0699% ( 3) 00:07:46.079 11040.295 - 11090.708: 98.0915% ( 4) 00:07:46.079 11090.708 - 11141.120: 98.1077% ( 3) 00:07:46.079 11141.120 - 11191.532: 98.1455% ( 7) 00:07:46.079 11191.532 - 11241.945: 98.1726% ( 5) 00:07:46.079 11241.945 - 11292.357: 98.2104% ( 7) 00:07:46.079 11292.357 - 11342.769: 98.2483% ( 7) 00:07:46.079 11342.769 - 11393.182: 98.2807% ( 6) 00:07:46.079 11393.182 - 11443.594: 98.3186% ( 7) 00:07:46.079 11443.594 - 11494.006: 98.3510% ( 6) 00:07:46.079 11494.006 - 11544.418: 98.3888% ( 7) 00:07:46.079 11544.418 - 11594.831: 98.4159% ( 5) 00:07:46.079 11594.831 - 11645.243: 98.4321% ( 3) 00:07:46.079 11645.243 - 11695.655: 98.4483% ( 3) 00:07:46.079 11695.655 - 11746.068: 98.4645% ( 3) 00:07:46.079 11746.068 - 11796.480: 98.4808% ( 3) 00:07:46.079 11796.480 - 11846.892: 98.4970% ( 3) 00:07:46.079 11846.892 - 11897.305: 98.5132% ( 3) 00:07:46.079 11897.305 - 11947.717: 98.5294% ( 3) 00:07:46.079 11947.717 - 11998.129: 98.5510% ( 4) 00:07:46.079 11998.129 - 12048.542: 98.5673% ( 3) 00:07:46.079 12048.542 - 12098.954: 98.5889% ( 4) 00:07:46.079 12098.954 - 12149.366: 98.6213% ( 6) 00:07:46.079 12149.366 - 12199.778: 98.6592% ( 7) 00:07:46.079 12199.778 - 12250.191: 98.6754% ( 3) 00:07:46.079 12250.191 - 12300.603: 98.6916% ( 3) 00:07:46.079 12300.603 - 12351.015: 98.7240% ( 6) 00:07:46.079 12351.015 - 12401.428: 98.7565% ( 6) 00:07:46.079 12401.428 - 12451.840: 
98.7943% ( 7) 00:07:46.079 12451.840 - 12502.252: 98.8322% ( 7) 00:07:46.079 12502.252 - 12552.665: 98.8646% ( 6) 00:07:46.079 12552.665 - 12603.077: 98.9025% ( 7) 00:07:46.079 12603.077 - 12653.489: 98.9349% ( 6) 00:07:46.079 12653.489 - 12703.902: 98.9728% ( 7) 00:07:46.079 12703.902 - 12754.314: 99.0052% ( 6) 00:07:46.079 12754.314 - 12804.726: 99.0376% ( 6) 00:07:46.079 12804.726 - 12855.138: 99.0755% ( 7) 00:07:46.079 12855.138 - 12905.551: 99.1133% ( 7) 00:07:46.079 12905.551 - 13006.375: 99.1836% ( 13) 00:07:46.079 13006.375 - 13107.200: 99.2269% ( 8) 00:07:46.079 13107.200 - 13208.025: 99.2593% ( 6) 00:07:46.079 13208.025 - 13308.849: 99.2971% ( 7) 00:07:46.079 13308.849 - 13409.674: 99.3080% ( 2) 00:07:46.079 27625.945 - 27827.594: 99.3296% ( 4) 00:07:46.079 27827.594 - 28029.243: 99.3728% ( 8) 00:07:46.079 28029.243 - 28230.892: 99.4107% ( 7) 00:07:46.079 28230.892 - 28432.542: 99.4539% ( 8) 00:07:46.079 28432.542 - 28634.191: 99.4972% ( 8) 00:07:46.079 28634.191 - 28835.840: 99.5350% ( 7) 00:07:46.079 28835.840 - 29037.489: 99.5783% ( 8) 00:07:46.079 29037.489 - 29239.138: 99.6161% ( 7) 00:07:46.079 29239.138 - 29440.788: 99.6540% ( 7) 00:07:46.079 32465.526 - 32667.175: 99.6918% ( 7) 00:07:46.079 32667.175 - 32868.825: 99.7351% ( 8) 00:07:46.079 32868.825 - 33070.474: 99.7729% ( 7) 00:07:46.079 33070.474 - 33272.123: 99.8162% ( 8) 00:07:46.079 33272.123 - 33473.772: 99.8594% ( 8) 00:07:46.079 33473.772 - 33675.422: 99.8973% ( 7) 00:07:46.079 33675.422 - 33877.071: 99.9351% ( 7) 00:07:46.079 33877.071 - 34078.720: 99.9784% ( 8) 00:07:46.079 34078.720 - 34280.369: 100.0000% ( 4) 00:07:46.079 00:07:46.079 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.079 ============================================================================== 00:07:46.079 Range in us Cumulative IO count 00:07:46.079 5948.652 - 5973.858: 0.0054% ( 1) 00:07:46.079 5973.858 - 5999.065: 0.0324% ( 5) 00:07:46.079 5999.065 - 6024.271: 0.2325% ( 37) 00:07:46.079 6024.271 - 6049.477: 0.5190% ( 53) 00:07:46.079 6049.477 - 6074.683: 1.2435% ( 134) 00:07:46.079 6074.683 - 6099.889: 2.5735% ( 246) 00:07:46.079 6099.889 - 6125.095: 4.4983% ( 356) 00:07:46.079 6125.095 - 6150.302: 6.7204% ( 411) 00:07:46.079 6150.302 - 6175.508: 9.0128% ( 424) 00:07:46.079 6175.508 - 6200.714: 11.3430% ( 431) 00:07:46.079 6200.714 - 6225.920: 13.8516% ( 464) 00:07:46.079 6225.920 - 6251.126: 16.2197% ( 438) 00:07:46.079 6251.126 - 6276.332: 18.5986% ( 440) 00:07:46.079 6276.332 - 6301.538: 20.9829% ( 441) 00:07:46.079 6301.538 - 6326.745: 23.3672% ( 441) 00:07:46.079 6326.745 - 6351.951: 25.6542% ( 423) 00:07:46.079 6351.951 - 6377.157: 27.9628% ( 427) 00:07:46.079 6377.157 - 6402.363: 30.4174% ( 454) 00:07:46.079 6402.363 - 6427.569: 32.8558% ( 451) 00:07:46.079 6427.569 - 6452.775: 35.4563% ( 481) 00:07:46.079 6452.775 - 6503.188: 40.5439% ( 941) 00:07:46.079 6503.188 - 6553.600: 45.4152% ( 901) 00:07:46.079 6553.600 - 6604.012: 50.3785% ( 918) 00:07:46.079 6604.012 - 6654.425: 55.4390% ( 936) 00:07:46.079 6654.425 - 6704.837: 60.2941% ( 898) 00:07:46.079 6704.837 - 6755.249: 65.1763% ( 903) 00:07:46.079 6755.249 - 6805.662: 70.0314% ( 898) 00:07:46.079 6805.662 - 6856.074: 74.8486% ( 891) 00:07:46.079 6856.074 - 6906.486: 79.6929% ( 896) 00:07:46.079 6906.486 - 6956.898: 84.3642% ( 864) 00:07:46.079 6956.898 - 7007.311: 87.9217% ( 658) 00:07:46.079 7007.311 - 7057.723: 89.8627% ( 359) 00:07:46.079 7057.723 - 7108.135: 90.8629% ( 185) 00:07:46.079 7108.135 - 7158.548: 91.3927% ( 98) 00:07:46.079 7158.548 - 
7208.960: 91.7874% ( 73) 00:07:46.079 7208.960 - 7259.372: 92.1226% ( 62) 00:07:46.079 7259.372 - 7309.785: 92.4470% ( 60) 00:07:46.079 7309.785 - 7360.197: 92.7822% ( 62) 00:07:46.079 7360.197 - 7410.609: 92.9877% ( 38) 00:07:46.079 7410.609 - 7461.022: 93.0796% ( 17) 00:07:46.079 7461.022 - 7511.434: 93.1715% ( 17) 00:07:46.079 7511.434 - 7561.846: 93.2634% ( 17) 00:07:46.079 7561.846 - 7612.258: 93.3445% ( 15) 00:07:46.080 7612.258 - 7662.671: 93.4526% ( 20) 00:07:46.080 7662.671 - 7713.083: 93.5608% ( 20) 00:07:46.080 7713.083 - 7763.495: 93.6473% ( 16) 00:07:46.080 7763.495 - 7813.908: 93.7284% ( 15) 00:07:46.080 7813.908 - 7864.320: 93.8041% ( 14) 00:07:46.080 7864.320 - 7914.732: 93.8960% ( 17) 00:07:46.080 7914.732 - 7965.145: 93.9987% ( 19) 00:07:46.080 7965.145 - 8015.557: 94.0960% ( 18) 00:07:46.080 8015.557 - 8065.969: 94.1825% ( 16) 00:07:46.080 8065.969 - 8116.382: 94.2582% ( 14) 00:07:46.080 8116.382 - 8166.794: 94.3718% ( 21) 00:07:46.080 8166.794 - 8217.206: 94.5015% ( 24) 00:07:46.080 8217.206 - 8267.618: 94.5718% ( 13) 00:07:46.080 8267.618 - 8318.031: 94.6367% ( 12) 00:07:46.080 8318.031 - 8368.443: 94.7016% ( 12) 00:07:46.080 8368.443 - 8418.855: 94.7827% ( 15) 00:07:46.080 8418.855 - 8469.268: 94.8692% ( 16) 00:07:46.080 8469.268 - 8519.680: 94.9827% ( 21) 00:07:46.080 8519.680 - 8570.092: 95.0800% ( 18) 00:07:46.080 8570.092 - 8620.505: 95.1936% ( 21) 00:07:46.080 8620.505 - 8670.917: 95.3017% ( 20) 00:07:46.080 8670.917 - 8721.329: 95.4152% ( 21) 00:07:46.080 8721.329 - 8771.742: 95.5828% ( 31) 00:07:46.080 8771.742 - 8822.154: 95.7288% ( 27) 00:07:46.080 8822.154 - 8872.566: 95.8261% ( 18) 00:07:46.080 8872.566 - 8922.978: 95.9234% ( 18) 00:07:46.080 8922.978 - 8973.391: 96.0316% ( 20) 00:07:46.080 8973.391 - 9023.803: 96.1235% ( 17) 00:07:46.080 9023.803 - 9074.215: 96.2046% ( 15) 00:07:46.080 9074.215 - 9124.628: 96.2695% ( 12) 00:07:46.080 9124.628 - 9175.040: 96.3560% ( 16) 00:07:46.080 9175.040 - 9225.452: 96.4317% ( 14) 00:07:46.080 9225.452 - 9275.865: 96.5182% ( 16) 00:07:46.080 9275.865 - 9326.277: 96.5939% ( 14) 00:07:46.080 9326.277 - 9376.689: 96.6750% ( 15) 00:07:46.080 9376.689 - 9427.102: 96.7236% ( 9) 00:07:46.080 9427.102 - 9477.514: 96.7993% ( 14) 00:07:46.080 9477.514 - 9527.926: 96.8534% ( 10) 00:07:46.080 9527.926 - 9578.338: 96.8966% ( 8) 00:07:46.080 9578.338 - 9628.751: 96.9291% ( 6) 00:07:46.080 9628.751 - 9679.163: 96.9615% ( 6) 00:07:46.080 9679.163 - 9729.575: 96.9939% ( 6) 00:07:46.080 9729.575 - 9779.988: 97.0318% ( 7) 00:07:46.080 9779.988 - 9830.400: 97.0642% ( 6) 00:07:46.080 9830.400 - 9880.812: 97.1021% ( 7) 00:07:46.080 9880.812 - 9931.225: 97.1399% ( 7) 00:07:46.080 9931.225 - 9981.637: 97.1886% ( 9) 00:07:46.080 9981.637 - 10032.049: 97.2210% ( 6) 00:07:46.080 10032.049 - 10082.462: 97.2481% ( 5) 00:07:46.080 10082.462 - 10132.874: 97.2697% ( 4) 00:07:46.080 10132.874 - 10183.286: 97.2859% ( 3) 00:07:46.080 10183.286 - 10233.698: 97.2913% ( 1) 00:07:46.080 10233.698 - 10284.111: 97.3021% ( 2) 00:07:46.080 10284.111 - 10334.523: 97.3129% ( 2) 00:07:46.080 10334.523 - 10384.935: 97.3400% ( 5) 00:07:46.080 10384.935 - 10435.348: 97.3778% ( 7) 00:07:46.080 10435.348 - 10485.760: 97.4211% ( 8) 00:07:46.080 10485.760 - 10536.172: 97.4535% ( 6) 00:07:46.080 10536.172 - 10586.585: 97.4913% ( 7) 00:07:46.080 10586.585 - 10636.997: 97.5292% ( 7) 00:07:46.080 10636.997 - 10687.409: 97.5670% ( 7) 00:07:46.080 10687.409 - 10737.822: 97.6049% ( 7) 00:07:46.080 10737.822 - 10788.234: 97.6590% ( 10) 00:07:46.080 10788.234 - 10838.646: 97.7130% 
( 10) 00:07:46.080 10838.646 - 10889.058: 97.7617% ( 9) 00:07:46.080 10889.058 - 10939.471: 97.8157% ( 10) 00:07:46.080 10939.471 - 10989.883: 97.8698% ( 10) 00:07:46.080 10989.883 - 11040.295: 97.9293% ( 11) 00:07:46.080 11040.295 - 11090.708: 97.9725% ( 8) 00:07:46.080 11090.708 - 11141.120: 98.0104% ( 7) 00:07:46.080 11141.120 - 11191.532: 98.0428% ( 6) 00:07:46.080 11191.532 - 11241.945: 98.0915% ( 9) 00:07:46.080 11241.945 - 11292.357: 98.1347% ( 8) 00:07:46.080 11292.357 - 11342.769: 98.1672% ( 6) 00:07:46.080 11342.769 - 11393.182: 98.1996% ( 6) 00:07:46.080 11393.182 - 11443.594: 98.2375% ( 7) 00:07:46.080 11443.594 - 11494.006: 98.2699% ( 6) 00:07:46.080 11494.006 - 11544.418: 98.3131% ( 8) 00:07:46.080 11544.418 - 11594.831: 98.3456% ( 6) 00:07:46.080 11594.831 - 11645.243: 98.3834% ( 7) 00:07:46.080 11645.243 - 11695.655: 98.4105% ( 5) 00:07:46.080 11695.655 - 11746.068: 98.4375% ( 5) 00:07:46.080 11746.068 - 11796.480: 98.4699% ( 6) 00:07:46.080 11796.480 - 11846.892: 98.4970% ( 5) 00:07:46.080 11846.892 - 11897.305: 98.5186% ( 4) 00:07:46.080 11897.305 - 11947.717: 98.5348% ( 3) 00:07:46.080 11947.717 - 11998.129: 98.5564% ( 4) 00:07:46.080 11998.129 - 12048.542: 98.5727% ( 3) 00:07:46.080 12048.542 - 12098.954: 98.5889% ( 3) 00:07:46.080 12098.954 - 12149.366: 98.6159% ( 5) 00:07:46.080 12149.366 - 12199.778: 98.6429% ( 5) 00:07:46.080 12199.778 - 12250.191: 98.6592% ( 3) 00:07:46.080 12250.191 - 12300.603: 98.6754% ( 3) 00:07:46.080 12300.603 - 12351.015: 98.7024% ( 5) 00:07:46.080 12351.015 - 12401.428: 98.7295% ( 5) 00:07:46.080 12401.428 - 12451.840: 98.7511% ( 4) 00:07:46.080 12451.840 - 12502.252: 98.7781% ( 5) 00:07:46.080 12502.252 - 12552.665: 98.8106% ( 6) 00:07:46.080 12552.665 - 12603.077: 98.8538% ( 8) 00:07:46.080 12603.077 - 12653.489: 98.8808% ( 5) 00:07:46.080 12653.489 - 12703.902: 98.9133% ( 6) 00:07:46.080 12703.902 - 12754.314: 98.9457% ( 6) 00:07:46.080 12754.314 - 12804.726: 98.9782% ( 6) 00:07:46.080 12804.726 - 12855.138: 99.0106% ( 6) 00:07:46.080 12855.138 - 12905.551: 99.0430% ( 6) 00:07:46.080 12905.551 - 13006.375: 99.1025% ( 11) 00:07:46.080 13006.375 - 13107.200: 99.1728% ( 13) 00:07:46.080 13107.200 - 13208.025: 99.2377% ( 12) 00:07:46.080 13208.025 - 13308.849: 99.2701% ( 6) 00:07:46.080 13308.849 - 13409.674: 99.3026% ( 6) 00:07:46.080 13409.674 - 13510.498: 99.3080% ( 1) 00:07:46.080 26012.751 - 26214.400: 99.3350% ( 5) 00:07:46.080 26214.400 - 26416.049: 99.3782% ( 8) 00:07:46.080 26416.049 - 26617.698: 99.4161% ( 7) 00:07:46.080 26617.698 - 26819.348: 99.4485% ( 6) 00:07:46.080 26819.348 - 27020.997: 99.4864% ( 7) 00:07:46.080 27020.997 - 27222.646: 99.5242% ( 7) 00:07:46.080 27222.646 - 27424.295: 99.5675% ( 8) 00:07:46.080 27424.295 - 27625.945: 99.6053% ( 7) 00:07:46.080 27625.945 - 27827.594: 99.6486% ( 8) 00:07:46.080 27827.594 - 28029.243: 99.6540% ( 1) 00:07:46.080 30852.332 - 31053.982: 99.6918% ( 7) 00:07:46.080 31053.982 - 31255.631: 99.7297% ( 7) 00:07:46.080 31255.631 - 31457.280: 99.7675% ( 7) 00:07:46.080 31457.280 - 31658.929: 99.8108% ( 8) 00:07:46.080 31658.929 - 31860.578: 99.8540% ( 8) 00:07:46.080 31860.578 - 32062.228: 99.8919% ( 7) 00:07:46.080 32062.228 - 32263.877: 99.9351% ( 8) 00:07:46.080 32263.877 - 32465.526: 99.9784% ( 8) 00:07:46.080 32465.526 - 32667.175: 100.0000% ( 4) 00:07:46.080 00:07:46.080 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.080 ============================================================================== 00:07:46.080 Range in us Cumulative IO count 00:07:46.080 
5973.858 - 5999.065: 0.0108% ( 2) 00:07:46.080 5999.065 - 6024.271: 0.0703% ( 11) 00:07:46.080 6024.271 - 6049.477: 0.3947% ( 60) 00:07:46.080 6049.477 - 6074.683: 1.0813% ( 127) 00:07:46.080 6074.683 - 6099.889: 2.1464% ( 197) 00:07:46.080 6099.889 - 6125.095: 3.9792% ( 339) 00:07:46.080 6125.095 - 6150.302: 6.3095% ( 431) 00:07:46.080 6150.302 - 6175.508: 8.7208% ( 446) 00:07:46.080 6175.508 - 6200.714: 11.2673% ( 471) 00:07:46.080 6200.714 - 6225.920: 13.5597% ( 424) 00:07:46.080 6225.920 - 6251.126: 15.9061% ( 434) 00:07:46.080 6251.126 - 6276.332: 18.3121% ( 445) 00:07:46.080 6276.332 - 6301.538: 20.6045% ( 424) 00:07:46.080 6301.538 - 6326.745: 22.8806% ( 421) 00:07:46.080 6326.745 - 6351.951: 25.2541% ( 439) 00:07:46.080 6351.951 - 6377.157: 27.6817% ( 449) 00:07:46.080 6377.157 - 6402.363: 30.1362% ( 454) 00:07:46.080 6402.363 - 6427.569: 32.6341% ( 462) 00:07:46.080 6427.569 - 6452.775: 35.1265% ( 461) 00:07:46.080 6452.775 - 6503.188: 40.2736% ( 952) 00:07:46.080 6503.188 - 6553.600: 45.2909% ( 928) 00:07:46.080 6553.600 - 6604.012: 50.3082% ( 928) 00:07:46.080 6604.012 - 6654.425: 55.2876% ( 921) 00:07:46.080 6654.425 - 6704.837: 60.2184% ( 912) 00:07:46.080 6704.837 - 6755.249: 65.1546% ( 913) 00:07:46.080 6755.249 - 6805.662: 70.0962% ( 914) 00:07:46.080 6805.662 - 6856.074: 75.0811% ( 922) 00:07:46.080 6856.074 - 6906.486: 80.0768% ( 924) 00:07:46.080 6906.486 - 6956.898: 84.7967% ( 873) 00:07:46.080 6956.898 - 7007.311: 88.4083% ( 668) 00:07:46.080 7007.311 - 7057.723: 90.3114% ( 352) 00:07:46.080 7057.723 - 7108.135: 91.2251% ( 169) 00:07:46.080 7108.135 - 7158.548: 91.7279% ( 93) 00:07:46.080 7158.548 - 7208.960: 92.0469% ( 59) 00:07:46.080 7208.960 - 7259.372: 92.3605% ( 58) 00:07:46.080 7259.372 - 7309.785: 92.6254% ( 49) 00:07:46.080 7309.785 - 7360.197: 92.8201% ( 36) 00:07:46.080 7360.197 - 7410.609: 92.9660% ( 27) 00:07:46.080 7410.609 - 7461.022: 93.0796% ( 21) 00:07:46.080 7461.022 - 7511.434: 93.1715% ( 17) 00:07:46.080 7511.434 - 7561.846: 93.2742% ( 19) 00:07:46.080 7561.846 - 7612.258: 93.3661% ( 17) 00:07:46.080 7612.258 - 7662.671: 93.4526% ( 16) 00:07:46.080 7662.671 - 7713.083: 93.5337% ( 15) 00:07:46.080 7713.083 - 7763.495: 93.6202% ( 16) 00:07:46.080 7763.495 - 7813.908: 93.7067% ( 16) 00:07:46.080 7813.908 - 7864.320: 93.7716% ( 12) 00:07:46.080 7864.320 - 7914.732: 93.8527% ( 15) 00:07:46.080 7914.732 - 7965.145: 93.9284% ( 14) 00:07:46.080 7965.145 - 8015.557: 93.9879% ( 11) 00:07:46.081 8015.557 - 8065.969: 94.0690% ( 15) 00:07:46.081 8065.969 - 8116.382: 94.1339% ( 12) 00:07:46.081 8116.382 - 8166.794: 94.2096% ( 14) 00:07:46.081 8166.794 - 8217.206: 94.2744% ( 12) 00:07:46.081 8217.206 - 8267.618: 94.3609% ( 16) 00:07:46.081 8267.618 - 8318.031: 94.4420% ( 15) 00:07:46.081 8318.031 - 8368.443: 94.5285% ( 16) 00:07:46.081 8368.443 - 8418.855: 94.6259% ( 18) 00:07:46.081 8418.855 - 8469.268: 94.7394% ( 21) 00:07:46.081 8469.268 - 8519.680: 94.8475% ( 20) 00:07:46.081 8519.680 - 8570.092: 94.9449% ( 18) 00:07:46.081 8570.092 - 8620.505: 95.0692% ( 23) 00:07:46.081 8620.505 - 8670.917: 95.1719% ( 19) 00:07:46.081 8670.917 - 8721.329: 95.2747% ( 19) 00:07:46.081 8721.329 - 8771.742: 95.3936% ( 22) 00:07:46.081 8771.742 - 8822.154: 95.5396% ( 27) 00:07:46.081 8822.154 - 8872.566: 95.6801% ( 26) 00:07:46.081 8872.566 - 8922.978: 95.8261% ( 27) 00:07:46.081 8922.978 - 8973.391: 95.9829% ( 29) 00:07:46.081 8973.391 - 9023.803: 96.1019% ( 22) 00:07:46.081 9023.803 - 9074.215: 96.2370% ( 25) 00:07:46.081 9074.215 - 9124.628: 96.3397% ( 19) 00:07:46.081 
9124.628 - 9175.040: 96.4317% ( 17) 00:07:46.081 9175.040 - 9225.452: 96.4911% ( 11) 00:07:46.081 9225.452 - 9275.865: 96.5506% ( 11) 00:07:46.081 9275.865 - 9326.277: 96.5993% ( 9) 00:07:46.081 9326.277 - 9376.689: 96.6317% ( 6) 00:07:46.081 9376.689 - 9427.102: 96.6641% ( 6) 00:07:46.081 9427.102 - 9477.514: 96.7128% ( 9) 00:07:46.081 9477.514 - 9527.926: 96.7723% ( 11) 00:07:46.081 9527.926 - 9578.338: 96.8317% ( 11) 00:07:46.081 9578.338 - 9628.751: 96.8858% ( 10) 00:07:46.081 9628.751 - 9679.163: 96.9399% ( 10) 00:07:46.081 9679.163 - 9729.575: 96.9669% ( 5) 00:07:46.081 9729.575 - 9779.988: 97.0048% ( 7) 00:07:46.081 9779.988 - 9830.400: 97.0318% ( 5) 00:07:46.081 9830.400 - 9880.812: 97.0480% ( 3) 00:07:46.081 9880.812 - 9931.225: 97.0696% ( 4) 00:07:46.081 9931.225 - 9981.637: 97.0859% ( 3) 00:07:46.081 9981.637 - 10032.049: 97.1075% ( 4) 00:07:46.081 10032.049 - 10082.462: 97.1237% ( 3) 00:07:46.081 10082.462 - 10132.874: 97.1507% ( 5) 00:07:46.081 10132.874 - 10183.286: 97.1832% ( 6) 00:07:46.081 10183.286 - 10233.698: 97.2318% ( 9) 00:07:46.081 10233.698 - 10284.111: 97.2805% ( 9) 00:07:46.081 10284.111 - 10334.523: 97.3400% ( 11) 00:07:46.081 10334.523 - 10384.935: 97.3886% ( 9) 00:07:46.081 10384.935 - 10435.348: 97.4211% ( 6) 00:07:46.081 10435.348 - 10485.760: 97.4643% ( 8) 00:07:46.081 10485.760 - 10536.172: 97.5022% ( 7) 00:07:46.081 10536.172 - 10586.585: 97.5616% ( 11) 00:07:46.081 10586.585 - 10636.997: 97.6157% ( 10) 00:07:46.081 10636.997 - 10687.409: 97.6752% ( 11) 00:07:46.081 10687.409 - 10737.822: 97.7292% ( 10) 00:07:46.081 10737.822 - 10788.234: 97.7779% ( 9) 00:07:46.081 10788.234 - 10838.646: 97.8320% ( 10) 00:07:46.081 10838.646 - 10889.058: 97.8860% ( 10) 00:07:46.081 10889.058 - 10939.471: 97.9455% ( 11) 00:07:46.081 10939.471 - 10989.883: 97.9888% ( 8) 00:07:46.081 10989.883 - 11040.295: 98.0482% ( 11) 00:07:46.081 11040.295 - 11090.708: 98.0861% ( 7) 00:07:46.081 11090.708 - 11141.120: 98.1293% ( 8) 00:07:46.081 11141.120 - 11191.532: 98.1510% ( 4) 00:07:46.081 11191.532 - 11241.945: 98.1672% ( 3) 00:07:46.081 11241.945 - 11292.357: 98.1834% ( 3) 00:07:46.081 11292.357 - 11342.769: 98.1996% ( 3) 00:07:46.081 11342.769 - 11393.182: 98.2158% ( 3) 00:07:46.081 11393.182 - 11443.594: 98.2375% ( 4) 00:07:46.081 11443.594 - 11494.006: 98.2537% ( 3) 00:07:46.081 11494.006 - 11544.418: 98.2699% ( 3) 00:07:46.081 11746.068 - 11796.480: 98.2753% ( 1) 00:07:46.081 11796.480 - 11846.892: 98.2915% ( 3) 00:07:46.081 11846.892 - 11897.305: 98.3294% ( 7) 00:07:46.081 11897.305 - 11947.717: 98.3726% ( 8) 00:07:46.081 11947.717 - 11998.129: 98.4051% ( 6) 00:07:46.081 11998.129 - 12048.542: 98.4375% ( 6) 00:07:46.081 12048.542 - 12098.954: 98.4699% ( 6) 00:07:46.081 12098.954 - 12149.366: 98.5132% ( 8) 00:07:46.081 12149.366 - 12199.778: 98.5456% ( 6) 00:07:46.081 12199.778 - 12250.191: 98.5835% ( 7) 00:07:46.081 12250.191 - 12300.603: 98.6105% ( 5) 00:07:46.081 12300.603 - 12351.015: 98.6484% ( 7) 00:07:46.081 12351.015 - 12401.428: 98.6970% ( 9) 00:07:46.081 12401.428 - 12451.840: 98.7457% ( 9) 00:07:46.081 12451.840 - 12502.252: 98.7997% ( 10) 00:07:46.081 12502.252 - 12552.665: 98.8538% ( 10) 00:07:46.081 12552.665 - 12603.077: 98.9025% ( 9) 00:07:46.081 12603.077 - 12653.489: 98.9565% ( 10) 00:07:46.081 12653.489 - 12703.902: 99.0052% ( 9) 00:07:46.081 12703.902 - 12754.314: 99.0538% ( 9) 00:07:46.081 12754.314 - 12804.726: 99.1025% ( 9) 00:07:46.081 12804.726 - 12855.138: 99.1295% ( 5) 00:07:46.081 12855.138 - 12905.551: 99.1458% ( 3) 00:07:46.081 12905.551 - 
13006.375: 99.1782% ( 6) 00:07:46.081 13006.375 - 13107.200: 99.2052% ( 5) 00:07:46.081 13107.200 - 13208.025: 99.2431% ( 7) 00:07:46.081 13208.025 - 13308.849: 99.2755% ( 6) 00:07:46.081 13308.849 - 13409.674: 99.3080% ( 6) 00:07:46.081 24097.083 - 24197.908: 99.3242% ( 3) 00:07:46.081 24197.908 - 24298.732: 99.3458% ( 4) 00:07:46.081 24298.732 - 24399.557: 99.3674% ( 4) 00:07:46.081 24399.557 - 24500.382: 99.3837% ( 3) 00:07:46.081 24500.382 - 24601.206: 99.4053% ( 4) 00:07:46.081 24601.206 - 24702.031: 99.4269% ( 4) 00:07:46.081 24702.031 - 24802.855: 99.4485% ( 4) 00:07:46.081 24802.855 - 24903.680: 99.4702% ( 4) 00:07:46.081 24903.680 - 25004.505: 99.4918% ( 4) 00:07:46.081 25004.505 - 25105.329: 99.5080% ( 3) 00:07:46.081 25105.329 - 25206.154: 99.5296% ( 4) 00:07:46.081 25206.154 - 25306.978: 99.5513% ( 4) 00:07:46.081 25306.978 - 25407.803: 99.5729% ( 4) 00:07:46.081 25407.803 - 25508.628: 99.5891% ( 3) 00:07:46.081 25508.628 - 25609.452: 99.6107% ( 4) 00:07:46.081 25609.452 - 25710.277: 99.6324% ( 4) 00:07:46.081 25710.277 - 25811.102: 99.6540% ( 4) 00:07:46.081 28634.191 - 28835.840: 99.6594% ( 1) 00:07:46.081 28835.840 - 29037.489: 99.6972% ( 7) 00:07:46.081 29037.489 - 29239.138: 99.7351% ( 7) 00:07:46.081 29239.138 - 29440.788: 99.7783% ( 8) 00:07:46.081 29440.788 - 29642.437: 99.8216% ( 8) 00:07:46.081 29642.437 - 29844.086: 99.8594% ( 7) 00:07:46.081 29844.086 - 30045.735: 99.8973% ( 7) 00:07:46.081 30045.735 - 30247.385: 99.9405% ( 8) 00:07:46.081 30247.385 - 30449.034: 99.9838% ( 8) 00:07:46.081 30449.034 - 30650.683: 100.0000% ( 3) 00:07:46.081 00:07:46.081 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.081 ============================================================================== 00:07:46.081 Range in us Cumulative IO count 00:07:46.081 5973.858 - 5999.065: 0.0162% ( 3) 00:07:46.081 5999.065 - 6024.271: 0.0757% ( 11) 00:07:46.081 6024.271 - 6049.477: 0.2379% ( 30) 00:07:46.081 6049.477 - 6074.683: 0.9353% ( 129) 00:07:46.081 6074.683 - 6099.889: 2.1356% ( 222) 00:07:46.081 6099.889 - 6125.095: 4.1739% ( 377) 00:07:46.081 6125.095 - 6150.302: 6.5095% ( 432) 00:07:46.081 6150.302 - 6175.508: 8.7803% ( 420) 00:07:46.081 6175.508 - 6200.714: 11.3268% ( 471) 00:07:46.081 6200.714 - 6225.920: 13.6949% ( 438) 00:07:46.081 6225.920 - 6251.126: 15.9656% ( 420) 00:07:46.081 6251.126 - 6276.332: 18.3553% ( 442) 00:07:46.081 6276.332 - 6301.538: 20.6153% ( 418) 00:07:46.081 6301.538 - 6326.745: 23.0482% ( 450) 00:07:46.081 6326.745 - 6351.951: 25.4271% ( 440) 00:07:46.081 6351.951 - 6377.157: 27.7682% ( 433) 00:07:46.081 6377.157 - 6402.363: 30.1308% ( 437) 00:07:46.081 6402.363 - 6427.569: 32.5205% ( 442) 00:07:46.081 6427.569 - 6452.775: 34.9048% ( 441) 00:07:46.081 6452.775 - 6503.188: 39.8843% ( 921) 00:07:46.081 6503.188 - 6553.600: 45.0368% ( 953) 00:07:46.081 6553.600 - 6604.012: 50.0919% ( 935) 00:07:46.081 6604.012 - 6654.425: 54.9957% ( 907) 00:07:46.081 6654.425 - 6704.837: 59.9697% ( 920) 00:07:46.081 6704.837 - 6755.249: 64.8735% ( 907) 00:07:46.081 6755.249 - 6805.662: 69.7610% ( 904) 00:07:46.081 6805.662 - 6856.074: 74.7135% ( 916) 00:07:46.081 6856.074 - 6906.486: 79.7091% ( 924) 00:07:46.081 6906.486 - 6956.898: 84.4399% ( 875) 00:07:46.081 6956.898 - 7007.311: 88.0731% ( 672) 00:07:46.081 7007.311 - 7057.723: 90.0465% ( 365) 00:07:46.081 7057.723 - 7108.135: 90.9278% ( 163) 00:07:46.081 7108.135 - 7158.548: 91.4846% ( 103) 00:07:46.081 7158.548 - 7208.960: 91.8090% ( 60) 00:07:46.081 7208.960 - 7259.372: 92.1388% ( 61) 00:07:46.081 
7259.372 - 7309.785: 92.4200% ( 52) 00:07:46.081 7309.785 - 7360.197: 92.7065% ( 53) 00:07:46.081 7360.197 - 7410.609: 92.8633% ( 29) 00:07:46.081 7410.609 - 7461.022: 92.9823% ( 22) 00:07:46.081 7461.022 - 7511.434: 93.0904% ( 20) 00:07:46.081 7511.434 - 7561.846: 93.1931% ( 19) 00:07:46.081 7561.846 - 7612.258: 93.2958% ( 19) 00:07:46.081 7612.258 - 7662.671: 93.3769% ( 15) 00:07:46.081 7662.671 - 7713.083: 93.4689% ( 17) 00:07:46.081 7713.083 - 7763.495: 93.5554% ( 16) 00:07:46.081 7763.495 - 7813.908: 93.6581% ( 19) 00:07:46.081 7813.908 - 7864.320: 93.7446% ( 16) 00:07:46.081 7864.320 - 7914.732: 93.8419% ( 18) 00:07:46.081 7914.732 - 7965.145: 93.9338% ( 17) 00:07:46.081 7965.145 - 8015.557: 94.0365% ( 19) 00:07:46.081 8015.557 - 8065.969: 94.1339% ( 18) 00:07:46.081 8065.969 - 8116.382: 94.2420% ( 20) 00:07:46.081 8116.382 - 8166.794: 94.3285% ( 16) 00:07:46.081 8166.794 - 8217.206: 94.3934% ( 12) 00:07:46.081 8217.206 - 8267.618: 94.4474% ( 10) 00:07:46.081 8267.618 - 8318.031: 94.5069% ( 11) 00:07:46.082 8318.031 - 8368.443: 94.5880% ( 15) 00:07:46.082 8368.443 - 8418.855: 94.6529% ( 12) 00:07:46.082 8418.855 - 8469.268: 94.7340% ( 15) 00:07:46.082 8469.268 - 8519.680: 94.8205% ( 16) 00:07:46.082 8519.680 - 8570.092: 94.9124% ( 17) 00:07:46.082 8570.092 - 8620.505: 95.0151% ( 19) 00:07:46.082 8620.505 - 8670.917: 95.1449% ( 24) 00:07:46.082 8670.917 - 8721.329: 95.2314% ( 16) 00:07:46.082 8721.329 - 8771.742: 95.3287% ( 18) 00:07:46.082 8771.742 - 8822.154: 95.4260% ( 18) 00:07:46.082 8822.154 - 8872.566: 95.5450% ( 22) 00:07:46.082 8872.566 - 8922.978: 95.6856% ( 26) 00:07:46.082 8922.978 - 8973.391: 95.7991% ( 21) 00:07:46.082 8973.391 - 9023.803: 95.9018% ( 19) 00:07:46.082 9023.803 - 9074.215: 96.0045% ( 19) 00:07:46.082 9074.215 - 9124.628: 96.1073% ( 19) 00:07:46.082 9124.628 - 9175.040: 96.2154% ( 20) 00:07:46.082 9175.040 - 9225.452: 96.3235% ( 20) 00:07:46.082 9225.452 - 9275.865: 96.4046% ( 15) 00:07:46.082 9275.865 - 9326.277: 96.4911% ( 16) 00:07:46.082 9326.277 - 9376.689: 96.5776% ( 16) 00:07:46.082 9376.689 - 9427.102: 96.6425% ( 12) 00:07:46.082 9427.102 - 9477.514: 96.7128% ( 13) 00:07:46.082 9477.514 - 9527.926: 96.7561% ( 8) 00:07:46.082 9527.926 - 9578.338: 96.7939% ( 7) 00:07:46.082 9578.338 - 9628.751: 96.8263% ( 6) 00:07:46.082 9628.751 - 9679.163: 96.8642% ( 7) 00:07:46.082 9679.163 - 9729.575: 96.8966% ( 6) 00:07:46.082 9729.575 - 9779.988: 96.9237% ( 5) 00:07:46.082 9779.988 - 9830.400: 96.9453% ( 4) 00:07:46.082 9830.400 - 9880.812: 96.9885% ( 8) 00:07:46.082 9880.812 - 9931.225: 97.0372% ( 9) 00:07:46.082 9931.225 - 9981.637: 97.1129% ( 14) 00:07:46.082 9981.637 - 10032.049: 97.1832% ( 13) 00:07:46.082 10032.049 - 10082.462: 97.2589% ( 14) 00:07:46.082 10082.462 - 10132.874: 97.3346% ( 14) 00:07:46.082 10132.874 - 10183.286: 97.4048% ( 13) 00:07:46.082 10183.286 - 10233.698: 97.4805% ( 14) 00:07:46.082 10233.698 - 10284.111: 97.5508% ( 13) 00:07:46.082 10284.111 - 10334.523: 97.6211% ( 13) 00:07:46.082 10334.523 - 10384.935: 97.6914% ( 13) 00:07:46.082 10384.935 - 10435.348: 97.7671% ( 14) 00:07:46.082 10435.348 - 10485.760: 97.8374% ( 13) 00:07:46.082 10485.760 - 10536.172: 97.9077% ( 13) 00:07:46.082 10536.172 - 10586.585: 97.9833% ( 14) 00:07:46.082 10586.585 - 10636.997: 98.0428% ( 11) 00:07:46.082 10636.997 - 10687.409: 98.0969% ( 10) 00:07:46.082 10687.409 - 10737.822: 98.1455% ( 9) 00:07:46.082 10737.822 - 10788.234: 98.1996% ( 10) 00:07:46.082 10788.234 - 10838.646: 98.2266% ( 5) 00:07:46.082 10838.646 - 10889.058: 98.2429% ( 3) 00:07:46.082 
10889.058 - 10939.471: 98.2591% ( 3) 00:07:46.082 10939.471 - 10989.883: 98.2699% ( 2) 00:07:46.082 11645.243 - 11695.655: 98.2861% ( 3) 00:07:46.082 11695.655 - 11746.068: 98.3023% ( 3) 00:07:46.082 11746.068 - 11796.480: 98.3240% ( 4) 00:07:46.082 11796.480 - 11846.892: 98.3402% ( 3) 00:07:46.082 11846.892 - 11897.305: 98.3564% ( 3) 00:07:46.082 11897.305 - 11947.717: 98.3726% ( 3) 00:07:46.082 11947.717 - 11998.129: 98.3888% ( 3) 00:07:46.082 11998.129 - 12048.542: 98.4051% ( 3) 00:07:46.082 12048.542 - 12098.954: 98.4213% ( 3) 00:07:46.082 12098.954 - 12149.366: 98.4429% ( 4) 00:07:46.082 12149.366 - 12199.778: 98.4591% ( 3) 00:07:46.082 12199.778 - 12250.191: 98.4916% ( 6) 00:07:46.082 12250.191 - 12300.603: 98.5240% ( 6) 00:07:46.082 12300.603 - 12351.015: 98.5619% ( 7) 00:07:46.082 12351.015 - 12401.428: 98.5943% ( 6) 00:07:46.082 12401.428 - 12451.840: 98.6321% ( 7) 00:07:46.082 12451.840 - 12502.252: 98.6646% ( 6) 00:07:46.082 12502.252 - 12552.665: 98.7024% ( 7) 00:07:46.082 12552.665 - 12603.077: 98.7511% ( 9) 00:07:46.082 12603.077 - 12653.489: 98.7997% ( 9) 00:07:46.082 12653.489 - 12703.902: 98.8376% ( 7) 00:07:46.082 12703.902 - 12754.314: 98.8700% ( 6) 00:07:46.082 12754.314 - 12804.726: 98.9079% ( 7) 00:07:46.082 12804.726 - 12855.138: 98.9457% ( 7) 00:07:46.082 12855.138 - 12905.551: 98.9782% ( 6) 00:07:46.082 12905.551 - 13006.375: 99.0484% ( 13) 00:07:46.082 13006.375 - 13107.200: 99.1133% ( 12) 00:07:46.082 13107.200 - 13208.025: 99.1674% ( 10) 00:07:46.082 13208.025 - 13308.849: 99.1998% ( 6) 00:07:46.082 13308.849 - 13409.674: 99.2377% ( 7) 00:07:46.082 13409.674 - 13510.498: 99.2701% ( 6) 00:07:46.082 13510.498 - 13611.323: 99.3026% ( 6) 00:07:46.082 13611.323 - 13712.148: 99.3080% ( 1) 00:07:46.082 21878.942 - 21979.766: 99.3188% ( 2) 00:07:46.082 21979.766 - 22080.591: 99.3404% ( 4) 00:07:46.082 22080.591 - 22181.415: 99.3620% ( 4) 00:07:46.082 22181.415 - 22282.240: 99.3782% ( 3) 00:07:46.082 22282.240 - 22383.065: 99.3999% ( 4) 00:07:46.082 22383.065 - 22483.889: 99.4215% ( 4) 00:07:46.082 22483.889 - 22584.714: 99.4431% ( 4) 00:07:46.082 22584.714 - 22685.538: 99.4647% ( 4) 00:07:46.082 22685.538 - 22786.363: 99.4810% ( 3) 00:07:46.082 22786.363 - 22887.188: 99.4972% ( 3) 00:07:46.082 22887.188 - 22988.012: 99.5188% ( 4) 00:07:46.082 22988.012 - 23088.837: 99.5350% ( 3) 00:07:46.082 23088.837 - 23189.662: 99.5567% ( 4) 00:07:46.082 23189.662 - 23290.486: 99.5783% ( 4) 00:07:46.082 23290.486 - 23391.311: 99.5999% ( 4) 00:07:46.082 23391.311 - 23492.135: 99.6215% ( 4) 00:07:46.082 23492.135 - 23592.960: 99.6378% ( 3) 00:07:46.082 23592.960 - 23693.785: 99.6540% ( 3) 00:07:46.082 26617.698 - 26819.348: 99.6918% ( 7) 00:07:46.082 26819.348 - 27020.997: 99.7297% ( 7) 00:07:46.082 27020.997 - 27222.646: 99.7405% ( 2) 00:07:46.082 27222.646 - 27424.295: 99.7729% ( 6) 00:07:46.082 27424.295 - 27625.945: 99.8108% ( 7) 00:07:46.082 27625.945 - 27827.594: 99.8540% ( 8) 00:07:46.082 27827.594 - 28029.243: 99.8973% ( 8) 00:07:46.082 28029.243 - 28230.892: 99.9351% ( 7) 00:07:46.082 28230.892 - 28432.542: 99.9784% ( 8) 00:07:46.082 28432.542 - 28634.191: 100.0000% ( 4) 00:07:46.082 00:07:46.082 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.082 ============================================================================== 00:07:46.082 Range in us Cumulative IO count 00:07:46.082 5973.858 - 5999.065: 0.0054% ( 1) 00:07:46.082 5999.065 - 6024.271: 0.0593% ( 10) 00:07:46.082 6024.271 - 6049.477: 0.3125% ( 47) 00:07:46.082 6049.477 - 6074.683: 0.9213% ( 
113) 00:07:46.082 6074.683 - 6099.889: 2.1444% ( 227) 00:07:46.082 6099.889 - 6125.095: 4.0787% ( 359) 00:07:46.082 6125.095 - 6150.302: 6.5948% ( 467) 00:07:46.082 6150.302 - 6175.508: 9.1164% ( 468) 00:07:46.082 6175.508 - 6200.714: 11.3901% ( 422) 00:07:46.082 6200.714 - 6225.920: 13.6315% ( 416) 00:07:46.082 6225.920 - 6251.126: 15.7004% ( 384) 00:07:46.082 6251.126 - 6276.332: 18.0927% ( 444) 00:07:46.082 6276.332 - 6301.538: 20.6088% ( 467) 00:07:46.082 6301.538 - 6326.745: 22.9526% ( 435) 00:07:46.082 6326.745 - 6351.951: 25.2909% ( 434) 00:07:46.082 6351.951 - 6377.157: 27.6347% ( 435) 00:07:46.082 6377.157 - 6402.363: 30.0485% ( 448) 00:07:46.082 6402.363 - 6427.569: 32.4623% ( 448) 00:07:46.082 6427.569 - 6452.775: 34.8545% ( 444) 00:07:46.082 6452.775 - 6503.188: 39.8761% ( 932) 00:07:46.082 6503.188 - 6553.600: 44.7953% ( 913) 00:07:46.082 6553.600 - 6604.012: 49.7144% ( 913) 00:07:46.082 6604.012 - 6654.425: 54.6444% ( 915) 00:07:46.082 6654.425 - 6704.837: 59.5312% ( 907) 00:07:46.082 6704.837 - 6755.249: 64.4019% ( 904) 00:07:46.082 6755.249 - 6805.662: 69.3373% ( 916) 00:07:46.082 6805.662 - 6856.074: 74.1810% ( 899) 00:07:46.082 6856.074 - 6906.486: 79.1541% ( 923) 00:07:46.082 6906.486 - 6956.898: 83.8470% ( 871) 00:07:46.082 6956.898 - 7007.311: 87.3869% ( 657) 00:07:46.082 7007.311 - 7057.723: 89.4289% ( 379) 00:07:46.082 7057.723 - 7108.135: 90.3556% ( 172) 00:07:46.082 7108.135 - 7158.548: 90.8782% ( 97) 00:07:46.082 7158.548 - 7208.960: 91.2554% ( 70) 00:07:46.082 7208.960 - 7259.372: 91.6110% ( 66) 00:07:46.082 7259.372 - 7309.785: 91.9181% ( 57) 00:07:46.082 7309.785 - 7360.197: 92.1929% ( 51) 00:07:46.083 7360.197 - 7410.609: 92.3599% ( 31) 00:07:46.083 7410.609 - 7461.022: 92.5108% ( 28) 00:07:46.083 7461.022 - 7511.434: 92.6670% ( 29) 00:07:46.083 7511.434 - 7561.846: 92.8071% ( 26) 00:07:46.083 7561.846 - 7612.258: 92.9688% ( 30) 00:07:46.083 7612.258 - 7662.671: 93.1196% ( 28) 00:07:46.083 7662.671 - 7713.083: 93.2812% ( 30) 00:07:46.083 7713.083 - 7763.495: 93.4375% ( 29) 00:07:46.083 7763.495 - 7813.908: 93.5668% ( 24) 00:07:46.083 7813.908 - 7864.320: 93.6961% ( 24) 00:07:46.083 7864.320 - 7914.732: 93.8362% ( 26) 00:07:46.083 7914.732 - 7965.145: 93.9601% ( 23) 00:07:46.083 7965.145 - 8015.557: 94.0894% ( 24) 00:07:46.083 8015.557 - 8065.969: 94.1918% ( 19) 00:07:46.083 8065.969 - 8116.382: 94.2942% ( 19) 00:07:46.083 8116.382 - 8166.794: 94.3858% ( 17) 00:07:46.083 8166.794 - 8217.206: 94.4881% ( 19) 00:07:46.083 8217.206 - 8267.618: 94.5744% ( 16) 00:07:46.083 8267.618 - 8318.031: 94.6606% ( 16) 00:07:46.083 8318.031 - 8368.443: 94.7845% ( 23) 00:07:46.083 8368.443 - 8418.855: 94.8976% ( 21) 00:07:46.083 8418.855 - 8469.268: 95.0054% ( 20) 00:07:46.083 8469.268 - 8519.680: 95.1078% ( 19) 00:07:46.083 8519.680 - 8570.092: 95.1994% ( 17) 00:07:46.083 8570.092 - 8620.505: 95.2694% ( 13) 00:07:46.083 8620.505 - 8670.917: 95.3394% ( 13) 00:07:46.083 8670.917 - 8721.329: 95.4095% ( 13) 00:07:46.083 8721.329 - 8771.742: 95.4741% ( 12) 00:07:46.083 8771.742 - 8822.154: 95.5442% ( 13) 00:07:46.083 8822.154 - 8872.566: 95.6304% ( 16) 00:07:46.083 8872.566 - 8922.978: 95.7220% ( 17) 00:07:46.083 8922.978 - 8973.391: 95.8028% ( 15) 00:07:46.083 8973.391 - 9023.803: 95.8944% ( 17) 00:07:46.083 9023.803 - 9074.215: 95.9752% ( 15) 00:07:46.083 9074.215 - 9124.628: 96.0614% ( 16) 00:07:46.083 9124.628 - 9175.040: 96.1800% ( 22) 00:07:46.083 9175.040 - 9225.452: 96.2554% ( 14) 00:07:46.083 9225.452 - 9275.865: 96.3362% ( 15) 00:07:46.083 9275.865 - 9326.277: 96.3955% ( 
11) 00:07:46.083 9326.277 - 9376.689: 96.4440% ( 9) 00:07:46.083 9376.689 - 9427.102: 96.4925% ( 9) 00:07:46.083 9427.102 - 9477.514: 96.5463% ( 10) 00:07:46.083 9477.514 - 9527.926: 96.6110% ( 12) 00:07:46.083 9527.926 - 9578.338: 96.6972% ( 16) 00:07:46.083 9578.338 - 9628.751: 96.7726% ( 14) 00:07:46.083 9628.751 - 9679.163: 96.8588% ( 16) 00:07:46.083 9679.163 - 9729.575: 96.9612% ( 19) 00:07:46.083 9729.575 - 9779.988: 97.0420% ( 15) 00:07:46.083 9779.988 - 9830.400: 97.1121% ( 13) 00:07:46.083 9830.400 - 9880.812: 97.1767% ( 12) 00:07:46.083 9880.812 - 9931.225: 97.2575% ( 15) 00:07:46.083 9931.225 - 9981.637: 97.3222% ( 12) 00:07:46.083 9981.637 - 10032.049: 97.3976% ( 14) 00:07:46.083 10032.049 - 10082.462: 97.4731% ( 14) 00:07:46.083 10082.462 - 10132.874: 97.5377% ( 12) 00:07:46.083 10132.874 - 10183.286: 97.6024% ( 12) 00:07:46.083 10183.286 - 10233.698: 97.6886% ( 16) 00:07:46.083 10233.698 - 10284.111: 97.7586% ( 13) 00:07:46.083 10284.111 - 10334.523: 97.8233% ( 12) 00:07:46.083 10334.523 - 10384.935: 97.9041% ( 15) 00:07:46.083 10384.935 - 10435.348: 97.9634% ( 11) 00:07:46.083 10435.348 - 10485.760: 98.0280% ( 12) 00:07:46.083 10485.760 - 10536.172: 98.0657% ( 7) 00:07:46.083 10536.172 - 10586.585: 98.1034% ( 7) 00:07:46.083 10586.585 - 10636.997: 98.1358% ( 6) 00:07:46.083 10636.997 - 10687.409: 98.1681% ( 6) 00:07:46.083 10687.409 - 10737.822: 98.1843% ( 3) 00:07:46.083 10737.822 - 10788.234: 98.2004% ( 3) 00:07:46.083 10788.234 - 10838.646: 98.2166% ( 3) 00:07:46.083 10838.646 - 10889.058: 98.2381% ( 4) 00:07:46.083 10889.058 - 10939.471: 98.2543% ( 3) 00:07:46.083 10939.471 - 10989.883: 98.2759% ( 4) 00:07:46.083 11443.594 - 11494.006: 98.2866% ( 2) 00:07:46.083 11494.006 - 11544.418: 98.3028% ( 3) 00:07:46.083 11544.418 - 11594.831: 98.3190% ( 3) 00:07:46.083 11594.831 - 11645.243: 98.3351% ( 3) 00:07:46.083 11645.243 - 11695.655: 98.3513% ( 3) 00:07:46.083 11695.655 - 11746.068: 98.3675% ( 3) 00:07:46.083 11746.068 - 11796.480: 98.3836% ( 3) 00:07:46.083 11796.480 - 11846.892: 98.3998% ( 3) 00:07:46.083 11846.892 - 11897.305: 98.4159% ( 3) 00:07:46.083 11897.305 - 11947.717: 98.4375% ( 4) 00:07:46.083 11947.717 - 11998.129: 98.4537% ( 3) 00:07:46.083 11998.129 - 12048.542: 98.4698% ( 3) 00:07:46.083 12048.542 - 12098.954: 98.4860% ( 3) 00:07:46.083 12098.954 - 12149.366: 98.5022% ( 3) 00:07:46.083 12149.366 - 12199.778: 98.5183% ( 3) 00:07:46.083 12199.778 - 12250.191: 98.5345% ( 3) 00:07:46.083 12250.191 - 12300.603: 98.5668% ( 6) 00:07:46.083 12300.603 - 12351.015: 98.6045% ( 7) 00:07:46.083 12351.015 - 12401.428: 98.6369% ( 6) 00:07:46.083 12401.428 - 12451.840: 98.6853% ( 9) 00:07:46.083 12451.840 - 12502.252: 98.7392% ( 10) 00:07:46.083 12502.252 - 12552.665: 98.7769% ( 7) 00:07:46.083 12552.665 - 12603.077: 98.8093% ( 6) 00:07:46.083 12603.077 - 12653.489: 98.8416% ( 6) 00:07:46.083 12653.489 - 12703.902: 98.8739% ( 6) 00:07:46.083 12703.902 - 12754.314: 98.9116% ( 7) 00:07:46.083 12754.314 - 12804.726: 98.9440% ( 6) 00:07:46.083 12804.726 - 12855.138: 98.9817% ( 7) 00:07:46.083 12855.138 - 12905.551: 99.0194% ( 7) 00:07:46.083 12905.551 - 13006.375: 99.0894% ( 13) 00:07:46.083 13006.375 - 13107.200: 99.1595% ( 13) 00:07:46.083 13107.200 - 13208.025: 99.2241% ( 12) 00:07:46.083 13208.025 - 13308.849: 99.2565% ( 6) 00:07:46.083 13308.849 - 13409.674: 99.2888% ( 6) 00:07:46.083 13409.674 - 13510.498: 99.3103% ( 4) 00:07:46.083 16333.588 - 16434.412: 99.3211% ( 2) 00:07:46.083 16434.412 - 16535.237: 99.3373% ( 3) 00:07:46.083 16535.237 - 16636.062: 99.3534% ( 3) 
00:07:46.083 16636.062 - 16736.886: 99.3750% ( 4) 00:07:46.083 16736.886 - 16837.711: 99.3966% ( 4) 00:07:46.083 16837.711 - 16938.535: 99.4127% ( 3) 00:07:46.083 16938.535 - 17039.360: 99.4343% ( 4) 00:07:46.083 17039.360 - 17140.185: 99.4558% ( 4) 00:07:46.083 17140.185 - 17241.009: 99.4774% ( 4) 00:07:46.083 17241.009 - 17341.834: 99.4989% ( 4) 00:07:46.083 17341.834 - 17442.658: 99.5205% ( 4) 00:07:46.083 17442.658 - 17543.483: 99.5420% ( 4) 00:07:46.083 17543.483 - 17644.308: 99.5582% ( 3) 00:07:46.083 17644.308 - 17745.132: 99.5797% ( 4) 00:07:46.083 17745.132 - 17845.957: 99.6013% ( 4) 00:07:46.083 17845.957 - 17946.782: 99.6228% ( 4) 00:07:46.083 17946.782 - 18047.606: 99.6390% ( 3) 00:07:46.083 18047.606 - 18148.431: 99.6552% ( 3) 00:07:46.083 21173.169 - 21273.994: 99.6659% ( 2) 00:07:46.083 21273.994 - 21374.818: 99.6875% ( 4) 00:07:46.083 21374.818 - 21475.643: 99.7091% ( 4) 00:07:46.083 21475.643 - 21576.468: 99.7252% ( 3) 00:07:46.083 21576.468 - 21677.292: 99.7468% ( 4) 00:07:46.083 21677.292 - 21778.117: 99.7683% ( 4) 00:07:46.083 21778.117 - 21878.942: 99.7899% ( 4) 00:07:46.083 21878.942 - 21979.766: 99.8114% ( 4) 00:07:46.083 21979.766 - 22080.591: 99.8330% ( 4) 00:07:46.083 22080.591 - 22181.415: 99.8491% ( 3) 00:07:46.083 22181.415 - 22282.240: 99.8707% ( 4) 00:07:46.083 22282.240 - 22383.065: 99.8922% ( 4) 00:07:46.083 22383.065 - 22483.889: 99.9138% ( 4) 00:07:46.083 22483.889 - 22584.714: 99.9300% ( 3) 00:07:46.083 22584.714 - 22685.538: 99.9515% ( 4) 00:07:46.083 22685.538 - 22786.363: 99.9731% ( 4) 00:07:46.083 22786.363 - 22887.188: 99.9946% ( 4) 00:07:46.083 22887.188 - 22988.012: 100.0000% ( 1) 00:07:46.083 00:07:46.083 06:31:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:47.017 Initializing NVMe Controllers 00:07:47.017 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.017 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.017 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.017 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.017 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:47.017 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:47.017 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:47.017 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:47.017 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:47.017 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:47.017 Initialization complete. Launching workers. 
00:07:47.017 ======================================================== 00:07:47.017 Latency(us) 00:07:47.017 Device Information : IOPS MiB/s Average min max 00:07:47.017 PCIE (0000:00:10.0) NSID 1 from core 0: 15951.89 186.94 8034.87 5731.96 33408.41 00:07:47.017 PCIE (0000:00:11.0) NSID 1 from core 0: 15951.89 186.94 8019.69 5828.73 31161.90 00:07:47.017 PCIE (0000:00:13.0) NSID 1 from core 0: 15951.89 186.94 8004.82 5944.35 29202.63 00:07:47.017 PCIE (0000:00:12.0) NSID 1 from core 0: 15951.89 186.94 7990.09 5778.99 27168.37 00:07:47.017 PCIE (0000:00:12.0) NSID 2 from core 0: 15951.89 186.94 7975.31 6050.65 25129.10 00:07:47.017 PCIE (0000:00:12.0) NSID 3 from core 0: 16015.70 187.68 7928.86 5935.41 19514.64 00:07:47.017 ======================================================== 00:07:47.017 Total : 95775.14 1122.36 7992.23 5731.96 33408.41 00:07:47.017 00:07:47.018 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6200.714us 00:07:47.018 10.00000% : 6503.188us 00:07:47.018 25.00000% : 6654.425us 00:07:47.018 50.00000% : 7007.311us 00:07:47.018 75.00000% : 7813.908us 00:07:47.018 90.00000% : 11796.480us 00:07:47.018 95.00000% : 13107.200us 00:07:47.018 98.00000% : 14821.218us 00:07:47.018 99.00000% : 15526.991us 00:07:47.018 99.50000% : 27827.594us 00:07:47.018 99.90000% : 33070.474us 00:07:47.018 99.99000% : 33473.772us 00:07:47.018 99.99900% : 33473.772us 00:07:47.018 99.99990% : 33473.772us 00:07:47.018 99.99999% : 33473.772us 00:07:47.018 00:07:47.018 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6276.332us 00:07:47.018 10.00000% : 6604.012us 00:07:47.018 25.00000% : 6755.249us 00:07:47.018 50.00000% : 6906.486us 00:07:47.018 75.00000% : 7965.145us 00:07:47.018 90.00000% : 11746.068us 00:07:47.018 95.00000% : 13308.849us 00:07:47.018 98.00000% : 14821.218us 00:07:47.018 99.00000% : 15526.991us 00:07:47.018 99.50000% : 25609.452us 00:07:47.018 99.90000% : 30852.332us 00:07:47.018 99.99000% : 31255.631us 00:07:47.018 99.99900% : 31255.631us 00:07:47.018 99.99990% : 31255.631us 00:07:47.018 99.99999% : 31255.631us 00:07:47.018 00:07:47.018 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6276.332us 00:07:47.018 10.00000% : 6604.012us 00:07:47.018 25.00000% : 6755.249us 00:07:47.018 50.00000% : 6906.486us 00:07:47.018 75.00000% : 8015.557us 00:07:47.018 90.00000% : 11746.068us 00:07:47.018 95.00000% : 13409.674us 00:07:47.018 98.00000% : 14317.095us 00:07:47.018 99.00000% : 15829.465us 00:07:47.018 99.50000% : 24197.908us 00:07:47.018 99.90000% : 28835.840us 00:07:47.018 99.99000% : 29239.138us 00:07:47.018 99.99900% : 29239.138us 00:07:47.018 99.99990% : 29239.138us 00:07:47.018 99.99999% : 29239.138us 00:07:47.018 00:07:47.018 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6276.332us 00:07:47.018 10.00000% : 6604.012us 00:07:47.018 25.00000% : 6755.249us 00:07:47.018 50.00000% : 6906.486us 00:07:47.018 75.00000% : 7965.145us 00:07:47.018 90.00000% : 11846.892us 00:07:47.018 95.00000% : 13208.025us 00:07:47.018 98.00000% : 14417.920us 00:07:47.018 
99.00000% : 16232.763us 00:07:47.018 99.50000% : 22080.591us 00:07:47.018 99.90000% : 26819.348us 00:07:47.018 99.99000% : 27222.646us 00:07:47.018 99.99900% : 27222.646us 00:07:47.018 99.99990% : 27222.646us 00:07:47.018 99.99999% : 27222.646us 00:07:47.018 00:07:47.018 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6251.126us 00:07:47.018 10.00000% : 6604.012us 00:07:47.018 25.00000% : 6755.249us 00:07:47.018 50.00000% : 6906.486us 00:07:47.018 75.00000% : 7914.732us 00:07:47.018 90.00000% : 11746.068us 00:07:47.018 95.00000% : 13107.200us 00:07:47.018 98.00000% : 14619.569us 00:07:47.018 99.00000% : 15930.289us 00:07:47.018 99.50000% : 19963.274us 00:07:47.018 99.90000% : 24702.031us 00:07:47.018 99.99000% : 25105.329us 00:07:47.018 99.99900% : 25206.154us 00:07:47.018 99.99990% : 25206.154us 00:07:47.018 99.99999% : 25206.154us 00:07:47.018 00:07:47.018 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:47.018 ================================================================================= 00:07:47.018 1.00000% : 6276.332us 00:07:47.018 10.00000% : 6604.012us 00:07:47.018 25.00000% : 6755.249us 00:07:47.018 50.00000% : 6906.486us 00:07:47.018 75.00000% : 7813.908us 00:07:47.018 90.00000% : 11746.068us 00:07:47.018 95.00000% : 13107.200us 00:07:47.018 98.00000% : 14720.394us 00:07:47.018 99.00000% : 15224.517us 00:07:47.018 99.50000% : 15526.991us 00:07:47.018 99.90000% : 19156.677us 00:07:47.018 99.99000% : 19559.975us 00:07:47.018 99.99900% : 19559.975us 00:07:47.018 99.99990% : 19559.975us 00:07:47.018 99.99999% : 19559.975us 00:07:47.018 00:07:47.018 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.018 ============================================================================== 00:07:47.018 Range in us Cumulative IO count 00:07:47.018 5721.797 - 5747.003: 0.0312% ( 5) 00:07:47.018 5747.003 - 5772.209: 0.0375% ( 1) 00:07:47.018 5772.209 - 5797.415: 0.0500% ( 2) 00:07:47.018 5797.415 - 5822.622: 0.0625% ( 2) 00:07:47.018 5822.622 - 5847.828: 0.0813% ( 3) 00:07:47.018 5847.828 - 5873.034: 0.0938% ( 2) 00:07:47.018 5873.034 - 5898.240: 0.1125% ( 3) 00:07:47.018 5898.240 - 5923.446: 0.1375% ( 4) 00:07:47.018 5923.446 - 5948.652: 0.1750% ( 6) 00:07:47.018 5948.652 - 5973.858: 0.2250% ( 8) 00:07:47.018 5973.858 - 5999.065: 0.2812% ( 9) 00:07:47.018 5999.065 - 6024.271: 0.3438% ( 10) 00:07:47.018 6024.271 - 6049.477: 0.3937% ( 8) 00:07:47.018 6049.477 - 6074.683: 0.4750% ( 13) 00:07:47.018 6074.683 - 6099.889: 0.5312% ( 9) 00:07:47.018 6099.889 - 6125.095: 0.6375% ( 17) 00:07:47.018 6125.095 - 6150.302: 0.8125% ( 28) 00:07:47.018 6150.302 - 6175.508: 0.9563% ( 23) 00:07:47.018 6175.508 - 6200.714: 1.1313% ( 28) 00:07:47.018 6200.714 - 6225.920: 1.3250% ( 31) 00:07:47.018 6225.920 - 6251.126: 1.5813% ( 41) 00:07:47.018 6251.126 - 6276.332: 1.8813% ( 48) 00:07:47.018 6276.332 - 6301.538: 2.3937% ( 82) 00:07:47.018 6301.538 - 6326.745: 2.8125% ( 67) 00:07:47.018 6326.745 - 6351.951: 3.3250% ( 82) 00:07:47.018 6351.951 - 6377.157: 3.8937% ( 91) 00:07:47.018 6377.157 - 6402.363: 4.7000% ( 129) 00:07:47.018 6402.363 - 6427.569: 5.7500% ( 168) 00:07:47.018 6427.569 - 6452.775: 6.9375% ( 190) 00:07:47.018 6452.775 - 6503.188: 10.2188% ( 525) 00:07:47.018 6503.188 - 6553.600: 15.7125% ( 879) 00:07:47.018 6553.600 - 6604.012: 22.5375% ( 1092) 00:07:47.018 6604.012 - 6654.425: 27.4187% ( 781) 00:07:47.018 6654.425 - 
6704.837: 31.8250% ( 705) 00:07:47.018 6704.837 - 6755.249: 35.8625% ( 646) 00:07:47.018 6755.249 - 6805.662: 39.1750% ( 530) 00:07:47.018 6805.662 - 6856.074: 42.4375% ( 522) 00:07:47.018 6856.074 - 6906.486: 45.4875% ( 488) 00:07:47.018 6906.486 - 6956.898: 48.4625% ( 476) 00:07:47.018 6956.898 - 7007.311: 51.2125% ( 440) 00:07:47.018 7007.311 - 7057.723: 54.3625% ( 504) 00:07:47.018 7057.723 - 7108.135: 57.4188% ( 489) 00:07:47.018 7108.135 - 7158.548: 60.0312% ( 418) 00:07:47.018 7158.548 - 7208.960: 62.7062% ( 428) 00:07:47.018 7208.960 - 7259.372: 65.1937% ( 398) 00:07:47.018 7259.372 - 7309.785: 67.3250% ( 341) 00:07:47.018 7309.785 - 7360.197: 68.6312% ( 209) 00:07:47.018 7360.197 - 7410.609: 69.6562% ( 164) 00:07:47.018 7410.609 - 7461.022: 70.5687% ( 146) 00:07:47.018 7461.022 - 7511.434: 71.3250% ( 121) 00:07:47.018 7511.434 - 7561.846: 72.0062% ( 109) 00:07:47.018 7561.846 - 7612.258: 72.4750% ( 75) 00:07:47.018 7612.258 - 7662.671: 73.2125% ( 118) 00:07:47.018 7662.671 - 7713.083: 73.8375% ( 100) 00:07:47.018 7713.083 - 7763.495: 74.3688% ( 85) 00:07:47.018 7763.495 - 7813.908: 75.0000% ( 101) 00:07:47.018 7813.908 - 7864.320: 75.3875% ( 62) 00:07:47.018 7864.320 - 7914.732: 75.7250% ( 54) 00:07:47.018 7914.732 - 7965.145: 76.0438% ( 51) 00:07:47.018 7965.145 - 8015.557: 76.2875% ( 39) 00:07:47.018 8015.557 - 8065.969: 76.5125% ( 36) 00:07:47.018 8065.969 - 8116.382: 76.8187% ( 49) 00:07:47.018 8116.382 - 8166.794: 77.1000% ( 45) 00:07:47.018 8166.794 - 8217.206: 77.3125% ( 34) 00:07:47.018 8217.206 - 8267.618: 77.5750% ( 42) 00:07:47.018 8267.618 - 8318.031: 77.7875% ( 34) 00:07:47.018 8318.031 - 8368.443: 77.9437% ( 25) 00:07:47.018 8368.443 - 8418.855: 78.1125% ( 27) 00:07:47.018 8418.855 - 8469.268: 78.4125% ( 48) 00:07:47.018 8469.268 - 8519.680: 78.7938% ( 61) 00:07:47.018 8519.680 - 8570.092: 79.3063% ( 82) 00:07:47.018 8570.092 - 8620.505: 79.6250% ( 51) 00:07:47.018 8620.505 - 8670.917: 79.8937% ( 43) 00:07:47.018 8670.917 - 8721.329: 80.1437% ( 40) 00:07:47.018 8721.329 - 8771.742: 80.3250% ( 29) 00:07:47.018 8771.742 - 8822.154: 80.4813% ( 25) 00:07:47.018 8822.154 - 8872.566: 80.6688% ( 30) 00:07:47.018 8872.566 - 8922.978: 80.8438% ( 28) 00:07:47.018 8922.978 - 8973.391: 81.0687% ( 36) 00:07:47.018 8973.391 - 9023.803: 81.2625% ( 31) 00:07:47.018 9023.803 - 9074.215: 81.4625% ( 32) 00:07:47.018 9074.215 - 9124.628: 81.6562% ( 31) 00:07:47.018 9124.628 - 9175.040: 81.9000% ( 39) 00:07:47.018 9175.040 - 9225.452: 82.2188% ( 51) 00:07:47.018 9225.452 - 9275.865: 82.4313% ( 34) 00:07:47.018 9275.865 - 9326.277: 82.6125% ( 29) 00:07:47.018 9326.277 - 9376.689: 82.8063% ( 31) 00:07:47.018 9376.689 - 9427.102: 82.9875% ( 29) 00:07:47.018 9427.102 - 9477.514: 83.0812% ( 15) 00:07:47.018 9477.514 - 9527.926: 83.1813% ( 16) 00:07:47.018 9527.926 - 9578.338: 83.2687% ( 14) 00:07:47.018 9578.338 - 9628.751: 83.4313% ( 26) 00:07:47.018 9628.751 - 9679.163: 83.5312% ( 16) 00:07:47.018 9679.163 - 9729.575: 83.6188% ( 14) 00:07:47.019 9729.575 - 9779.988: 83.7000% ( 13) 00:07:47.019 9779.988 - 9830.400: 83.8000% ( 16) 00:07:47.019 9830.400 - 9880.812: 83.8812% ( 13) 00:07:47.019 9880.812 - 9931.225: 83.9562% ( 12) 00:07:47.019 9931.225 - 9981.637: 84.0438% ( 14) 00:07:47.019 9981.637 - 10032.049: 84.1625% ( 19) 00:07:47.019 10032.049 - 10082.462: 84.2313% ( 11) 00:07:47.019 10082.462 - 10132.874: 84.2625% ( 5) 00:07:47.019 10132.874 - 10183.286: 84.3375% ( 12) 00:07:47.019 10183.286 - 10233.698: 84.4000% ( 10) 00:07:47.019 10233.698 - 10284.111: 84.4688% ( 11) 00:07:47.019 
10284.111 - 10334.523: 84.5438% ( 12) 00:07:47.019 10334.523 - 10384.935: 84.6312% ( 14) 00:07:47.019 10384.935 - 10435.348: 84.7125% ( 13) 00:07:47.019 10435.348 - 10485.760: 84.8063% ( 15) 00:07:47.019 10485.760 - 10536.172: 84.9437% ( 22) 00:07:47.019 10536.172 - 10586.585: 85.0875% ( 23) 00:07:47.019 10586.585 - 10636.997: 85.1688% ( 13) 00:07:47.019 10636.997 - 10687.409: 85.3125% ( 23) 00:07:47.019 10687.409 - 10737.822: 85.3812% ( 11) 00:07:47.019 10737.822 - 10788.234: 85.5750% ( 31) 00:07:47.019 10788.234 - 10838.646: 85.8063% ( 37) 00:07:47.019 10838.646 - 10889.058: 86.0375% ( 37) 00:07:47.019 10889.058 - 10939.471: 86.2062% ( 27) 00:07:47.019 10939.471 - 10989.883: 86.4625% ( 41) 00:07:47.019 10989.883 - 11040.295: 86.6312% ( 27) 00:07:47.019 11040.295 - 11090.708: 86.7438% ( 18) 00:07:47.019 11090.708 - 11141.120: 86.9750% ( 37) 00:07:47.019 11141.120 - 11191.532: 87.2062% ( 37) 00:07:47.019 11191.532 - 11241.945: 87.6875% ( 77) 00:07:47.019 11241.945 - 11292.357: 87.9437% ( 41) 00:07:47.019 11292.357 - 11342.769: 88.1750% ( 37) 00:07:47.019 11342.769 - 11393.182: 88.4250% ( 40) 00:07:47.019 11393.182 - 11443.594: 88.6250% ( 32) 00:07:47.019 11443.594 - 11494.006: 88.7938% ( 27) 00:07:47.019 11494.006 - 11544.418: 89.0125% ( 35) 00:07:47.019 11544.418 - 11594.831: 89.1937% ( 29) 00:07:47.019 11594.831 - 11645.243: 89.4250% ( 37) 00:07:47.019 11645.243 - 11695.655: 89.6375% ( 34) 00:07:47.019 11695.655 - 11746.068: 89.8625% ( 36) 00:07:47.019 11746.068 - 11796.480: 90.0563% ( 31) 00:07:47.019 11796.480 - 11846.892: 90.2750% ( 35) 00:07:47.019 11846.892 - 11897.305: 90.4188% ( 23) 00:07:47.019 11897.305 - 11947.717: 90.5938% ( 28) 00:07:47.019 11947.717 - 11998.129: 90.7750% ( 29) 00:07:47.019 11998.129 - 12048.542: 90.9750% ( 32) 00:07:47.019 12048.542 - 12098.954: 91.2000% ( 36) 00:07:47.019 12098.954 - 12149.366: 91.3438% ( 23) 00:07:47.019 12149.366 - 12199.778: 91.5375% ( 31) 00:07:47.019 12199.778 - 12250.191: 91.6625% ( 20) 00:07:47.019 12250.191 - 12300.603: 91.7750% ( 18) 00:07:47.019 12300.603 - 12351.015: 91.9750% ( 32) 00:07:47.019 12351.015 - 12401.428: 92.1562% ( 29) 00:07:47.019 12401.428 - 12451.840: 92.2875% ( 21) 00:07:47.019 12451.840 - 12502.252: 92.5062% ( 35) 00:07:47.019 12502.252 - 12552.665: 92.7625% ( 41) 00:07:47.019 12552.665 - 12603.077: 93.0563% ( 47) 00:07:47.019 12603.077 - 12653.489: 93.2562% ( 32) 00:07:47.019 12653.489 - 12703.902: 93.4188% ( 26) 00:07:47.019 12703.902 - 12754.314: 93.5938% ( 28) 00:07:47.019 12754.314 - 12804.726: 93.7438% ( 24) 00:07:47.019 12804.726 - 12855.138: 94.0062% ( 42) 00:07:47.019 12855.138 - 12905.551: 94.1875% ( 29) 00:07:47.019 12905.551 - 13006.375: 94.6063% ( 67) 00:07:47.019 13006.375 - 13107.200: 95.0062% ( 64) 00:07:47.019 13107.200 - 13208.025: 95.3812% ( 60) 00:07:47.019 13208.025 - 13308.849: 95.7125% ( 53) 00:07:47.019 13308.849 - 13409.674: 95.9813% ( 43) 00:07:47.019 13409.674 - 13510.498: 96.1750% ( 31) 00:07:47.019 13510.498 - 13611.323: 96.3937% ( 35) 00:07:47.019 13611.323 - 13712.148: 96.5938% ( 32) 00:07:47.019 13712.148 - 13812.972: 96.7687% ( 28) 00:07:47.019 13812.972 - 13913.797: 96.9500% ( 29) 00:07:47.019 13913.797 - 14014.622: 97.1437% ( 31) 00:07:47.019 14014.622 - 14115.446: 97.2812% ( 22) 00:07:47.019 14115.446 - 14216.271: 97.3250% ( 7) 00:07:47.019 14216.271 - 14317.095: 97.4188% ( 15) 00:07:47.019 14317.095 - 14417.920: 97.5062% ( 14) 00:07:47.019 14417.920 - 14518.745: 97.6188% ( 18) 00:07:47.019 14518.745 - 14619.569: 97.7875% ( 27) 00:07:47.019 14619.569 - 14720.394: 97.9125% ( 
20) 00:07:47.019 14720.394 - 14821.218: 98.0750% ( 26) 00:07:47.019 14821.218 - 14922.043: 98.2313% ( 25) 00:07:47.019 14922.043 - 15022.868: 98.5125% ( 45) 00:07:47.019 15022.868 - 15123.692: 98.6375% ( 20) 00:07:47.019 15123.692 - 15224.517: 98.7625% ( 20) 00:07:47.019 15224.517 - 15325.342: 98.8875% ( 20) 00:07:47.019 15325.342 - 15426.166: 98.9813% ( 15) 00:07:47.019 15426.166 - 15526.991: 99.0687% ( 14) 00:07:47.019 15526.991 - 15627.815: 99.1312% ( 10) 00:07:47.019 15627.815 - 15728.640: 99.1375% ( 1) 00:07:47.019 15728.640 - 15829.465: 99.1500% ( 2) 00:07:47.019 15829.465 - 15930.289: 99.1625% ( 2) 00:07:47.019 15930.289 - 16031.114: 99.1750% ( 2) 00:07:47.019 16031.114 - 16131.938: 99.1813% ( 1) 00:07:47.019 16131.938 - 16232.763: 99.2000% ( 3) 00:07:47.019 26416.049 - 26617.698: 99.2313% ( 5) 00:07:47.019 26617.698 - 26819.348: 99.2938% ( 10) 00:07:47.019 26819.348 - 27020.997: 99.3438% ( 8) 00:07:47.019 27020.997 - 27222.646: 99.3937% ( 8) 00:07:47.019 27222.646 - 27424.295: 99.4562% ( 10) 00:07:47.019 27424.295 - 27625.945: 99.4625% ( 1) 00:07:47.019 27625.945 - 27827.594: 99.5125% ( 8) 00:07:47.019 27827.594 - 28029.243: 99.5563% ( 7) 00:07:47.019 28029.243 - 28230.892: 99.5938% ( 6) 00:07:47.019 28230.892 - 28432.542: 99.6000% ( 1) 00:07:47.019 31457.280 - 31658.929: 99.6312% ( 5) 00:07:47.019 31658.929 - 31860.578: 99.6750% ( 7) 00:07:47.019 31860.578 - 32062.228: 99.7125% ( 6) 00:07:47.019 32062.228 - 32263.877: 99.7562% ( 7) 00:07:47.019 32263.877 - 32465.526: 99.8000% ( 7) 00:07:47.019 32465.526 - 32667.175: 99.8438% ( 7) 00:07:47.019 32667.175 - 32868.825: 99.8875% ( 7) 00:07:47.019 32868.825 - 33070.474: 99.9313% ( 7) 00:07:47.019 33070.474 - 33272.123: 99.9750% ( 7) 00:07:47.019 33272.123 - 33473.772: 100.0000% ( 4) 00:07:47.019 00:07:47.019 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:47.019 ============================================================================== 00:07:47.019 Range in us Cumulative IO count 00:07:47.019 5822.622 - 5847.828: 0.0063% ( 1) 00:07:47.019 5973.858 - 5999.065: 0.0187% ( 2) 00:07:47.019 5999.065 - 6024.271: 0.0250% ( 1) 00:07:47.019 6074.683 - 6099.889: 0.0375% ( 2) 00:07:47.019 6099.889 - 6125.095: 0.0625% ( 4) 00:07:47.019 6125.095 - 6150.302: 0.1000% ( 6) 00:07:47.019 6150.302 - 6175.508: 0.1812% ( 13) 00:07:47.019 6175.508 - 6200.714: 0.5062% ( 52) 00:07:47.019 6200.714 - 6225.920: 0.6188% ( 18) 00:07:47.019 6225.920 - 6251.126: 0.7250% ( 17) 00:07:47.019 6251.126 - 6276.332: 1.0750% ( 56) 00:07:47.019 6276.332 - 6301.538: 1.2937% ( 35) 00:07:47.019 6301.538 - 6326.745: 1.8125% ( 83) 00:07:47.019 6326.745 - 6351.951: 2.0312% ( 35) 00:07:47.019 6351.951 - 6377.157: 2.3500% ( 51) 00:07:47.019 6377.157 - 6402.363: 2.9625% ( 98) 00:07:47.019 6402.363 - 6427.569: 3.8687% ( 145) 00:07:47.019 6427.569 - 6452.775: 4.3063% ( 70) 00:07:47.019 6452.775 - 6503.188: 5.5500% ( 199) 00:07:47.019 6503.188 - 6553.600: 7.8812% ( 373) 00:07:47.019 6553.600 - 6604.012: 11.8500% ( 635) 00:07:47.019 6604.012 - 6654.425: 16.2500% ( 704) 00:07:47.019 6654.425 - 6704.837: 22.7312% ( 1037) 00:07:47.019 6704.837 - 6755.249: 32.3312% ( 1536) 00:07:47.019 6755.249 - 6805.662: 40.0625% ( 1237) 00:07:47.019 6805.662 - 6856.074: 46.1938% ( 981) 00:07:47.019 6856.074 - 6906.486: 51.4813% ( 846) 00:07:47.019 6906.486 - 6956.898: 55.1437% ( 586) 00:07:47.019 6956.898 - 7007.311: 58.1750% ( 485) 00:07:47.019 7007.311 - 7057.723: 60.6750% ( 400) 00:07:47.019 7057.723 - 7108.135: 62.6562% ( 317) 00:07:47.019 7108.135 - 7158.548: 64.5500% ( 303) 
00:07:47.019 7158.548 - 7208.960: 65.8250% ( 204) 00:07:47.019 7208.960 - 7259.372: 67.6437% ( 291) 00:07:47.019 7259.372 - 7309.785: 69.1250% ( 237) 00:07:47.019 7309.785 - 7360.197: 70.6562% ( 245) 00:07:47.019 7360.197 - 7410.609: 71.7000% ( 167) 00:07:47.019 7410.609 - 7461.022: 72.3000% ( 96) 00:07:47.019 7461.022 - 7511.434: 72.8375% ( 86) 00:07:47.019 7511.434 - 7561.846: 73.2375% ( 64) 00:07:47.019 7561.846 - 7612.258: 73.5687% ( 53) 00:07:47.019 7612.258 - 7662.671: 73.7500% ( 29) 00:07:47.019 7662.671 - 7713.083: 73.8937% ( 23) 00:07:47.019 7713.083 - 7763.495: 74.0875% ( 31) 00:07:47.019 7763.495 - 7813.908: 74.4500% ( 58) 00:07:47.019 7813.908 - 7864.320: 74.6937% ( 39) 00:07:47.019 7864.320 - 7914.732: 74.9875% ( 47) 00:07:47.019 7914.732 - 7965.145: 75.3000% ( 50) 00:07:47.019 7965.145 - 8015.557: 75.8250% ( 84) 00:07:47.019 8015.557 - 8065.969: 76.1875% ( 58) 00:07:47.019 8065.969 - 8116.382: 76.5187% ( 53) 00:07:47.019 8116.382 - 8166.794: 76.8438% ( 52) 00:07:47.019 8166.794 - 8217.206: 77.0812% ( 38) 00:07:47.019 8217.206 - 8267.618: 77.3000% ( 35) 00:07:47.019 8267.618 - 8318.031: 77.5563% ( 41) 00:07:47.019 8318.031 - 8368.443: 77.8000% ( 39) 00:07:47.019 8368.443 - 8418.855: 78.2000% ( 64) 00:07:47.019 8418.855 - 8469.268: 78.6000% ( 64) 00:07:47.019 8469.268 - 8519.680: 78.9313% ( 53) 00:07:47.019 8519.680 - 8570.092: 79.1625% ( 37) 00:07:47.019 8570.092 - 8620.505: 79.3937% ( 37) 00:07:47.019 8620.505 - 8670.917: 79.6500% ( 41) 00:07:47.019 8670.917 - 8721.329: 79.9375% ( 46) 00:07:47.019 8721.329 - 8771.742: 80.1562% ( 35) 00:07:47.020 8771.742 - 8822.154: 80.4188% ( 42) 00:07:47.020 8822.154 - 8872.566: 80.7125% ( 47) 00:07:47.020 8872.566 - 8922.978: 80.9625% ( 40) 00:07:47.020 8922.978 - 8973.391: 81.1750% ( 34) 00:07:47.020 8973.391 - 9023.803: 81.3937% ( 35) 00:07:47.020 9023.803 - 9074.215: 81.6375% ( 39) 00:07:47.020 9074.215 - 9124.628: 81.8375% ( 32) 00:07:47.020 9124.628 - 9175.040: 82.1250% ( 46) 00:07:47.020 9175.040 - 9225.452: 82.3500% ( 36) 00:07:47.020 9225.452 - 9275.865: 82.5438% ( 31) 00:07:47.020 9275.865 - 9326.277: 82.7438% ( 32) 00:07:47.020 9326.277 - 9376.689: 82.9125% ( 27) 00:07:47.020 9376.689 - 9427.102: 83.0875% ( 28) 00:07:47.020 9427.102 - 9477.514: 83.2562% ( 27) 00:07:47.020 9477.514 - 9527.926: 83.4000% ( 23) 00:07:47.020 9527.926 - 9578.338: 83.4875% ( 14) 00:07:47.020 9578.338 - 9628.751: 83.6000% ( 18) 00:07:47.020 9628.751 - 9679.163: 83.7250% ( 20) 00:07:47.020 9679.163 - 9729.575: 83.8187% ( 15) 00:07:47.020 9729.575 - 9779.988: 83.9750% ( 25) 00:07:47.020 9779.988 - 9830.400: 84.0938% ( 19) 00:07:47.020 9830.400 - 9880.812: 84.1937% ( 16) 00:07:47.020 9880.812 - 9931.225: 84.3000% ( 17) 00:07:47.020 9931.225 - 9981.637: 84.4000% ( 16) 00:07:47.020 9981.637 - 10032.049: 84.4938% ( 15) 00:07:47.020 10032.049 - 10082.462: 84.6125% ( 19) 00:07:47.020 10082.462 - 10132.874: 84.7250% ( 18) 00:07:47.020 10132.874 - 10183.286: 84.7812% ( 9) 00:07:47.020 10183.286 - 10233.698: 84.8563% ( 12) 00:07:47.020 10233.698 - 10284.111: 84.9562% ( 16) 00:07:47.020 10284.111 - 10334.523: 85.0062% ( 8) 00:07:47.020 10334.523 - 10384.935: 85.0375% ( 5) 00:07:47.020 10384.935 - 10435.348: 85.0625% ( 4) 00:07:47.020 10435.348 - 10485.760: 85.0938% ( 5) 00:07:47.020 10485.760 - 10536.172: 85.1562% ( 10) 00:07:47.020 10536.172 - 10586.585: 85.2188% ( 10) 00:07:47.020 10586.585 - 10636.997: 85.2625% ( 7) 00:07:47.020 10636.997 - 10687.409: 85.3312% ( 11) 00:07:47.020 10687.409 - 10737.822: 85.4250% ( 15) 00:07:47.020 10737.822 - 10788.234: 85.5875% ( 
26) 00:07:47.020 10788.234 - 10838.646: 85.8187% ( 37) 00:07:47.020 10838.646 - 10889.058: 86.0875% ( 43) 00:07:47.020 10889.058 - 10939.471: 86.2625% ( 28) 00:07:47.020 10939.471 - 10989.883: 86.4250% ( 26) 00:07:47.020 10989.883 - 11040.295: 86.6375% ( 34) 00:07:47.020 11040.295 - 11090.708: 86.8187% ( 29) 00:07:47.020 11090.708 - 11141.120: 87.0625% ( 39) 00:07:47.020 11141.120 - 11191.532: 87.3688% ( 49) 00:07:47.020 11191.532 - 11241.945: 87.6000% ( 37) 00:07:47.020 11241.945 - 11292.357: 87.8563% ( 41) 00:07:47.020 11292.357 - 11342.769: 88.1125% ( 41) 00:07:47.020 11342.769 - 11393.182: 88.4000% ( 46) 00:07:47.020 11393.182 - 11443.594: 88.6250% ( 36) 00:07:47.020 11443.594 - 11494.006: 88.9000% ( 44) 00:07:47.020 11494.006 - 11544.418: 89.1562% ( 41) 00:07:47.020 11544.418 - 11594.831: 89.4313% ( 44) 00:07:47.020 11594.831 - 11645.243: 89.6750% ( 39) 00:07:47.020 11645.243 - 11695.655: 89.9250% ( 40) 00:07:47.020 11695.655 - 11746.068: 90.1625% ( 38) 00:07:47.020 11746.068 - 11796.480: 90.3625% ( 32) 00:07:47.020 11796.480 - 11846.892: 90.5500% ( 30) 00:07:47.020 11846.892 - 11897.305: 90.7000% ( 24) 00:07:47.020 11897.305 - 11947.717: 90.8937% ( 31) 00:07:47.020 11947.717 - 11998.129: 91.0687% ( 28) 00:07:47.020 11998.129 - 12048.542: 91.1937% ( 20) 00:07:47.020 12048.542 - 12098.954: 91.3438% ( 24) 00:07:47.020 12098.954 - 12149.366: 91.5438% ( 32) 00:07:47.020 12149.366 - 12199.778: 91.8187% ( 44) 00:07:47.020 12199.778 - 12250.191: 92.1625% ( 55) 00:07:47.020 12250.191 - 12300.603: 92.2875% ( 20) 00:07:47.020 12300.603 - 12351.015: 92.4250% ( 22) 00:07:47.020 12351.015 - 12401.428: 92.5687% ( 23) 00:07:47.020 12401.428 - 12451.840: 92.6937% ( 20) 00:07:47.020 12451.840 - 12502.252: 92.8000% ( 17) 00:07:47.020 12502.252 - 12552.665: 92.8438% ( 7) 00:07:47.020 12552.665 - 12603.077: 92.9500% ( 17) 00:07:47.020 12603.077 - 12653.489: 93.0938% ( 23) 00:07:47.020 12653.489 - 12703.902: 93.2062% ( 18) 00:07:47.020 12703.902 - 12754.314: 93.3563% ( 24) 00:07:47.020 12754.314 - 12804.726: 93.5438% ( 30) 00:07:47.020 12804.726 - 12855.138: 93.7000% ( 25) 00:07:47.020 12855.138 - 12905.551: 93.8375% ( 22) 00:07:47.020 12905.551 - 13006.375: 94.0938% ( 41) 00:07:47.020 13006.375 - 13107.200: 94.3688% ( 44) 00:07:47.020 13107.200 - 13208.025: 94.6562% ( 46) 00:07:47.020 13208.025 - 13308.849: 95.0500% ( 63) 00:07:47.020 13308.849 - 13409.674: 95.3187% ( 43) 00:07:47.020 13409.674 - 13510.498: 95.6625% ( 55) 00:07:47.020 13510.498 - 13611.323: 95.9437% ( 45) 00:07:47.020 13611.323 - 13712.148: 96.3000% ( 57) 00:07:47.020 13712.148 - 13812.972: 96.6063% ( 49) 00:07:47.020 13812.972 - 13913.797: 96.8312% ( 36) 00:07:47.020 13913.797 - 14014.622: 97.0250% ( 31) 00:07:47.020 14014.622 - 14115.446: 97.1937% ( 27) 00:07:47.020 14115.446 - 14216.271: 97.3000% ( 17) 00:07:47.020 14216.271 - 14317.095: 97.3750% ( 12) 00:07:47.020 14317.095 - 14417.920: 97.4750% ( 16) 00:07:47.020 14417.920 - 14518.745: 97.6000% ( 20) 00:07:47.020 14518.745 - 14619.569: 97.8688% ( 43) 00:07:47.020 14619.569 - 14720.394: 97.9562% ( 14) 00:07:47.020 14720.394 - 14821.218: 98.0625% ( 17) 00:07:47.020 14821.218 - 14922.043: 98.1500% ( 14) 00:07:47.020 14922.043 - 15022.868: 98.2812% ( 21) 00:07:47.020 15022.868 - 15123.692: 98.4562% ( 28) 00:07:47.020 15123.692 - 15224.517: 98.5500% ( 15) 00:07:47.020 15224.517 - 15325.342: 98.6562% ( 17) 00:07:47.020 15325.342 - 15426.166: 98.8125% ( 25) 00:07:47.020 15426.166 - 15526.991: 99.0000% ( 30) 00:07:47.020 15526.991 - 15627.815: 99.1063% ( 17) 00:07:47.020 15627.815 - 
15728.640: 99.1562% ( 8) 00:07:47.020 15728.640 - 15829.465: 99.1937% ( 6) 00:07:47.020 15829.465 - 15930.289: 99.2000% ( 1) 00:07:47.020 24298.732 - 24399.557: 99.2250% ( 4) 00:07:47.020 24399.557 - 24500.382: 99.2500% ( 4) 00:07:47.020 24500.382 - 24601.206: 99.2687% ( 3) 00:07:47.020 24601.206 - 24702.031: 99.2938% ( 4) 00:07:47.020 24702.031 - 24802.855: 99.3125% ( 3) 00:07:47.020 24802.855 - 24903.680: 99.3375% ( 4) 00:07:47.020 24903.680 - 25004.505: 99.3625% ( 4) 00:07:47.020 25004.505 - 25105.329: 99.3875% ( 4) 00:07:47.020 25105.329 - 25206.154: 99.4062% ( 3) 00:07:47.020 25206.154 - 25306.978: 99.4313% ( 4) 00:07:47.020 25306.978 - 25407.803: 99.4562% ( 4) 00:07:47.020 25407.803 - 25508.628: 99.4750% ( 3) 00:07:47.020 25508.628 - 25609.452: 99.5000% ( 4) 00:07:47.020 25609.452 - 25710.277: 99.5187% ( 3) 00:07:47.020 25710.277 - 25811.102: 99.5438% ( 4) 00:07:47.020 25811.102 - 26012.751: 99.5875% ( 7) 00:07:47.020 26012.751 - 26214.400: 99.6000% ( 2) 00:07:47.020 29239.138 - 29440.788: 99.6125% ( 2) 00:07:47.020 29440.788 - 29642.437: 99.6562% ( 7) 00:07:47.020 29642.437 - 29844.086: 99.7000% ( 7) 00:07:47.020 29844.086 - 30045.735: 99.7500% ( 8) 00:07:47.020 30045.735 - 30247.385: 99.7938% ( 7) 00:07:47.020 30247.385 - 30449.034: 99.8312% ( 6) 00:07:47.020 30449.034 - 30650.683: 99.8750% ( 7) 00:07:47.020 30650.683 - 30852.332: 99.9250% ( 8) 00:07:47.020 30852.332 - 31053.982: 99.9750% ( 8) 00:07:47.020 31053.982 - 31255.631: 100.0000% ( 4) 00:07:47.020 00:07:47.020 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.020 ============================================================================== 00:07:47.020 Range in us Cumulative IO count 00:07:47.020 5923.446 - 5948.652: 0.0063% ( 1) 00:07:47.020 5973.858 - 5999.065: 0.0187% ( 2) 00:07:47.020 5999.065 - 6024.271: 0.0250% ( 1) 00:07:47.020 6024.271 - 6049.477: 0.0750% ( 8) 00:07:47.020 6049.477 - 6074.683: 0.0938% ( 3) 00:07:47.020 6074.683 - 6099.889: 0.1375% ( 7) 00:07:47.020 6099.889 - 6125.095: 0.1812% ( 7) 00:07:47.020 6125.095 - 6150.302: 0.2375% ( 9) 00:07:47.020 6150.302 - 6175.508: 0.3000% ( 10) 00:07:47.020 6175.508 - 6200.714: 0.5312% ( 37) 00:07:47.020 6200.714 - 6225.920: 0.6562% ( 20) 00:07:47.020 6225.920 - 6251.126: 0.8688% ( 34) 00:07:47.020 6251.126 - 6276.332: 1.2188% ( 56) 00:07:47.020 6276.332 - 6301.538: 1.5000% ( 45) 00:07:47.020 6301.538 - 6326.745: 1.7375% ( 38) 00:07:47.020 6326.745 - 6351.951: 2.2375% ( 80) 00:07:47.020 6351.951 - 6377.157: 2.8500% ( 98) 00:07:47.020 6377.157 - 6402.363: 3.3875% ( 86) 00:07:47.020 6402.363 - 6427.569: 4.0438% ( 105) 00:07:47.020 6427.569 - 6452.775: 4.9875% ( 151) 00:07:47.020 6452.775 - 6503.188: 6.7188% ( 277) 00:07:47.020 6503.188 - 6553.600: 9.1813% ( 394) 00:07:47.020 6553.600 - 6604.012: 13.1375% ( 633) 00:07:47.020 6604.012 - 6654.425: 17.9750% ( 774) 00:07:47.020 6654.425 - 6704.837: 24.3750% ( 1024) 00:07:47.020 6704.837 - 6755.249: 32.8875% ( 1362) 00:07:47.020 6755.249 - 6805.662: 39.7375% ( 1096) 00:07:47.020 6805.662 - 6856.074: 45.7188% ( 957) 00:07:47.020 6856.074 - 6906.486: 50.2500% ( 725) 00:07:47.020 6906.486 - 6956.898: 54.1437% ( 623) 00:07:47.020 6956.898 - 7007.311: 57.6313% ( 558) 00:07:47.020 7007.311 - 7057.723: 60.7750% ( 503) 00:07:47.020 7057.723 - 7108.135: 62.9813% ( 353) 00:07:47.020 7108.135 - 7158.548: 64.8500% ( 299) 00:07:47.020 7158.548 - 7208.960: 66.5500% ( 272) 00:07:47.020 7208.960 - 7259.372: 67.8063% ( 201) 00:07:47.020 7259.372 - 7309.785: 69.1937% ( 222) 00:07:47.020 7309.785 - 7360.197: 69.8375% ( 
103) 00:07:47.020 7360.197 - 7410.609: 70.3750% ( 86) 00:07:47.020 7410.609 - 7461.022: 71.1562% ( 125) 00:07:47.021 7461.022 - 7511.434: 71.4813% ( 52) 00:07:47.021 7511.434 - 7561.846: 71.9313% ( 72) 00:07:47.021 7561.846 - 7612.258: 72.3438% ( 66) 00:07:47.021 7612.258 - 7662.671: 72.6750% ( 53) 00:07:47.021 7662.671 - 7713.083: 73.0375% ( 58) 00:07:47.021 7713.083 - 7763.495: 73.3187% ( 45) 00:07:47.021 7763.495 - 7813.908: 73.6000% ( 45) 00:07:47.021 7813.908 - 7864.320: 73.8812% ( 45) 00:07:47.021 7864.320 - 7914.732: 74.2250% ( 55) 00:07:47.021 7914.732 - 7965.145: 74.6437% ( 67) 00:07:47.021 7965.145 - 8015.557: 75.1937% ( 88) 00:07:47.021 8015.557 - 8065.969: 75.6875% ( 79) 00:07:47.021 8065.969 - 8116.382: 76.0500% ( 58) 00:07:47.021 8116.382 - 8166.794: 76.5687% ( 83) 00:07:47.021 8166.794 - 8217.206: 76.9000% ( 53) 00:07:47.021 8217.206 - 8267.618: 77.3750% ( 76) 00:07:47.021 8267.618 - 8318.031: 77.7062% ( 53) 00:07:47.021 8318.031 - 8368.443: 78.0438% ( 54) 00:07:47.021 8368.443 - 8418.855: 78.2812% ( 38) 00:07:47.021 8418.855 - 8469.268: 78.5812% ( 48) 00:07:47.021 8469.268 - 8519.680: 78.8438% ( 42) 00:07:47.021 8519.680 - 8570.092: 78.9500% ( 17) 00:07:47.021 8570.092 - 8620.505: 79.1000% ( 24) 00:07:47.021 8620.505 - 8670.917: 79.2562% ( 25) 00:07:47.021 8670.917 - 8721.329: 79.4750% ( 35) 00:07:47.021 8721.329 - 8771.742: 79.7250% ( 40) 00:07:47.021 8771.742 - 8822.154: 79.8937% ( 27) 00:07:47.021 8822.154 - 8872.566: 80.2812% ( 62) 00:07:47.021 8872.566 - 8922.978: 80.5250% ( 39) 00:07:47.021 8922.978 - 8973.391: 80.7625% ( 38) 00:07:47.021 8973.391 - 9023.803: 80.9875% ( 36) 00:07:47.021 9023.803 - 9074.215: 81.2125% ( 36) 00:07:47.021 9074.215 - 9124.628: 81.4062% ( 31) 00:07:47.021 9124.628 - 9175.040: 81.5438% ( 22) 00:07:47.021 9175.040 - 9225.452: 81.7000% ( 25) 00:07:47.021 9225.452 - 9275.865: 81.9437% ( 39) 00:07:47.021 9275.865 - 9326.277: 82.1000% ( 25) 00:07:47.021 9326.277 - 9376.689: 82.3187% ( 35) 00:07:47.021 9376.689 - 9427.102: 82.5000% ( 29) 00:07:47.021 9427.102 - 9477.514: 82.7562% ( 41) 00:07:47.021 9477.514 - 9527.926: 82.9000% ( 23) 00:07:47.021 9527.926 - 9578.338: 83.0250% ( 20) 00:07:47.021 9578.338 - 9628.751: 83.1813% ( 25) 00:07:47.021 9628.751 - 9679.163: 83.4625% ( 45) 00:07:47.021 9679.163 - 9729.575: 83.5938% ( 21) 00:07:47.021 9729.575 - 9779.988: 83.6688% ( 12) 00:07:47.021 9779.988 - 9830.400: 83.7625% ( 15) 00:07:47.021 9830.400 - 9880.812: 83.9125% ( 24) 00:07:47.021 9880.812 - 9931.225: 84.0938% ( 29) 00:07:47.021 9931.225 - 9981.637: 84.3937% ( 48) 00:07:47.021 9981.637 - 10032.049: 84.5750% ( 29) 00:07:47.021 10032.049 - 10082.462: 84.7188% ( 23) 00:07:47.021 10082.462 - 10132.874: 84.8875% ( 27) 00:07:47.021 10132.874 - 10183.286: 85.2250% ( 54) 00:07:47.021 10183.286 - 10233.698: 85.4062% ( 29) 00:07:47.021 10233.698 - 10284.111: 85.5687% ( 26) 00:07:47.021 10284.111 - 10334.523: 85.7000% ( 21) 00:07:47.021 10334.523 - 10384.935: 85.8312% ( 21) 00:07:47.021 10384.935 - 10435.348: 85.9000% ( 11) 00:07:47.021 10435.348 - 10485.760: 85.9500% ( 8) 00:07:47.021 10485.760 - 10536.172: 85.9813% ( 5) 00:07:47.021 10536.172 - 10586.585: 86.0000% ( 3) 00:07:47.021 10586.585 - 10636.997: 86.0125% ( 2) 00:07:47.021 10636.997 - 10687.409: 86.0187% ( 1) 00:07:47.021 10687.409 - 10737.822: 86.0438% ( 4) 00:07:47.021 10737.822 - 10788.234: 86.1375% ( 15) 00:07:47.021 10788.234 - 10838.646: 86.2875% ( 24) 00:07:47.021 10838.646 - 10889.058: 86.5125% ( 36) 00:07:47.021 10889.058 - 10939.471: 86.7500% ( 38) 00:07:47.021 10939.471 - 10989.883: 
86.9875% ( 38) 00:07:47.021 10989.883 - 11040.295: 87.1562% ( 27) 00:07:47.021 11040.295 - 11090.708: 87.3438% ( 30) 00:07:47.021 11090.708 - 11141.120: 87.5875% ( 39) 00:07:47.021 11141.120 - 11191.532: 87.7812% ( 31) 00:07:47.021 11191.532 - 11241.945: 87.9437% ( 26) 00:07:47.021 11241.945 - 11292.357: 88.0625% ( 19) 00:07:47.021 11292.357 - 11342.769: 88.2375% ( 28) 00:07:47.021 11342.769 - 11393.182: 88.4437% ( 33) 00:07:47.021 11393.182 - 11443.594: 88.6562% ( 34) 00:07:47.021 11443.594 - 11494.006: 88.8375% ( 29) 00:07:47.021 11494.006 - 11544.418: 89.0062% ( 27) 00:07:47.021 11544.418 - 11594.831: 89.2125% ( 33) 00:07:47.021 11594.831 - 11645.243: 89.5187% ( 49) 00:07:47.021 11645.243 - 11695.655: 89.8063% ( 46) 00:07:47.021 11695.655 - 11746.068: 90.0438% ( 38) 00:07:47.021 11746.068 - 11796.480: 90.2625% ( 35) 00:07:47.021 11796.480 - 11846.892: 90.4437% ( 29) 00:07:47.021 11846.892 - 11897.305: 90.6375% ( 31) 00:07:47.021 11897.305 - 11947.717: 90.8750% ( 38) 00:07:47.021 11947.717 - 11998.129: 91.0438% ( 27) 00:07:47.021 11998.129 - 12048.542: 91.1937% ( 24) 00:07:47.021 12048.542 - 12098.954: 91.4000% ( 33) 00:07:47.021 12098.954 - 12149.366: 91.5500% ( 24) 00:07:47.021 12149.366 - 12199.778: 91.6688% ( 19) 00:07:47.021 12199.778 - 12250.191: 91.7875% ( 19) 00:07:47.021 12250.191 - 12300.603: 91.9625% ( 28) 00:07:47.021 12300.603 - 12351.015: 92.1000% ( 22) 00:07:47.021 12351.015 - 12401.428: 92.3063% ( 33) 00:07:47.021 12401.428 - 12451.840: 92.4562% ( 24) 00:07:47.021 12451.840 - 12502.252: 92.6375% ( 29) 00:07:47.021 12502.252 - 12552.665: 92.8937% ( 41) 00:07:47.021 12552.665 - 12603.077: 93.0500% ( 25) 00:07:47.021 12603.077 - 12653.489: 93.1937% ( 23) 00:07:47.021 12653.489 - 12703.902: 93.3250% ( 21) 00:07:47.021 12703.902 - 12754.314: 93.4562% ( 21) 00:07:47.021 12754.314 - 12804.726: 93.5563% ( 16) 00:07:47.021 12804.726 - 12855.138: 93.7313% ( 28) 00:07:47.021 12855.138 - 12905.551: 93.8375% ( 17) 00:07:47.021 12905.551 - 13006.375: 94.0000% ( 26) 00:07:47.021 13006.375 - 13107.200: 94.1937% ( 31) 00:07:47.021 13107.200 - 13208.025: 94.4500% ( 41) 00:07:47.021 13208.025 - 13308.849: 94.6750% ( 36) 00:07:47.021 13308.849 - 13409.674: 95.0438% ( 59) 00:07:47.021 13409.674 - 13510.498: 95.4938% ( 72) 00:07:47.021 13510.498 - 13611.323: 95.8875% ( 63) 00:07:47.021 13611.323 - 13712.148: 96.2562% ( 59) 00:07:47.021 13712.148 - 13812.972: 96.5812% ( 52) 00:07:47.021 13812.972 - 13913.797: 96.8500% ( 43) 00:07:47.021 13913.797 - 14014.622: 97.2000% ( 56) 00:07:47.021 14014.622 - 14115.446: 97.4625% ( 42) 00:07:47.021 14115.446 - 14216.271: 97.8438% ( 61) 00:07:47.021 14216.271 - 14317.095: 98.1437% ( 48) 00:07:47.021 14317.095 - 14417.920: 98.2625% ( 19) 00:07:47.021 14417.920 - 14518.745: 98.4000% ( 22) 00:07:47.021 14518.745 - 14619.569: 98.5187% ( 19) 00:07:47.021 14619.569 - 14720.394: 98.5938% ( 12) 00:07:47.021 14720.394 - 14821.218: 98.6375% ( 7) 00:07:47.021 14821.218 - 14922.043: 98.6813% ( 7) 00:07:47.021 14922.043 - 15022.868: 98.7125% ( 5) 00:07:47.021 15022.868 - 15123.692: 98.7375% ( 4) 00:07:47.021 15123.692 - 15224.517: 98.7562% ( 3) 00:07:47.021 15224.517 - 15325.342: 98.8063% ( 8) 00:07:47.021 15325.342 - 15426.166: 98.8438% ( 6) 00:07:47.021 15426.166 - 15526.991: 98.8875% ( 7) 00:07:47.021 15526.991 - 15627.815: 98.9000% ( 2) 00:07:47.021 15627.815 - 15728.640: 98.9250% ( 4) 00:07:47.021 15728.640 - 15829.465: 99.0000% ( 12) 00:07:47.021 15829.465 - 15930.289: 99.1500% ( 24) 00:07:47.021 15930.289 - 16031.114: 99.1625% ( 2) 00:07:47.021 16031.114 - 
00:07:47.021 [tail of the preceding latency histogram omitted]
00:07:47.021 
00:07:47.021 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:47.021 ==============================================================================
00:07:47.021        Range in us     Cumulative IO count
00:07:47.022 [per-bucket latency distribution omitted]
00:07:47.023 
00:07:47.023 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:47.023 ==============================================================================
00:07:47.023        Range in us     Cumulative IO count
00:07:47.023 [per-bucket latency distribution omitted]
00:07:47.024 
00:07:47.024 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:47.024 ==============================================================================
00:07:47.024        Range in us     Cumulative IO count
00:07:47.024 [per-bucket latency distribution omitted]
00:07:47.025 
00:07:47.025 06:31:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:47.025 
00:07:47.025 real 0m2.530s
00:07:47.025 user 0m2.219s
00:07:47.025 sys 0m0.211s
00:07:47.025 ************************************
00:07:47.025 END TEST nvme_perf
00:07:47.025 ************************************
00:07:47.025 06:31:38 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.025 06:31:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:47.286 06:31:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:47.286 06:31:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:47.286 06:31:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.286 06:31:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:47.286 ************************************
00:07:47.286 START TEST nvme_hello_world
00:07:47.286 ************************************
00:07:47.286 06:31:38 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:47.286 Initializing NVMe Controllers
00:07:47.286 Attached to 0000:00:10.0
00:07:47.286 Namespace ID: 1 size: 6GB
00:07:47.286 Attached to 0000:00:11.0
00:07:47.286 Namespace ID: 1 size: 5GB
00:07:47.286 Attached to 0000:00:13.0
00:07:47.286 Namespace ID: 1 size: 1GB
00:07:47.286 Attached to 0000:00:12.0
00:07:47.286 Namespace ID: 1 size: 4GB
00:07:47.286 Namespace ID: 2 size: 4GB
00:07:47.286 Namespace ID: 3 size: 4GB
00:07:47.286 Initialization complete.
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
00:07:47.286 INFO: using host memory buffer for IO
00:07:47.286 Hello world!
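For orientation, a minimal sketch (not taken from this log) of how the same hello_world example could be rerun by hand on the build VM, assuming the SPDK checkout and built examples sit at the paths used above and that the HUGEMEM value suits the machine:

  cd /home/vagrant/spdk_repo/spdk
  # reserve hugepages and bind the NVMe controllers to a userspace driver
  sudo HUGEMEM=2048 scripts/setup.sh
  # -i 0 selects shared-memory group 0, matching the invocation in the log
  sudo build/examples/hello_world -i 0
  # hand the controllers back to the kernel nvme driver afterwards
  sudo scripts/setup.sh reset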
00:07:47.547 00:07:47.547 real 0m0.233s 00:07:47.547 user 0m0.086s 00:07:47.547 sys 0m0.102s 00:07:47.547 06:31:39 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.547 06:31:39 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:47.547 ************************************ 00:07:47.547 END TEST nvme_hello_world 00:07:47.547 ************************************ 00:07:47.547 06:31:39 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:47.547 06:31:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.547 06:31:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.547 06:31:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.547 ************************************ 00:07:47.547 START TEST nvme_sgl 00:07:47.547 ************************************ 00:07:47.547 06:31:39 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:47.547 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:47.547 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:47.547 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:47.808 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:47.808 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:47.808 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:47.809 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:47.809 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:47.809 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:47.809 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:47.809 NVMe Readv/Writev Request test 00:07:47.809 Attached to 0000:00:10.0 00:07:47.809 Attached to 0000:00:11.0 00:07:47.809 Attached to 0000:00:13.0 00:07:47.809 Attached to 0000:00:12.0 00:07:47.809 0000:00:10.0: build_io_request_2 test passed 00:07:47.809 0000:00:10.0: build_io_request_4 test passed 00:07:47.809 0000:00:10.0: build_io_request_5 test passed 00:07:47.809 0000:00:10.0: build_io_request_6 test passed 00:07:47.809 0000:00:10.0: build_io_request_7 test passed 00:07:47.809 0000:00:10.0: build_io_request_10 test passed 00:07:47.809 0000:00:11.0: build_io_request_2 test passed 00:07:47.809 0000:00:11.0: build_io_request_4 test passed 00:07:47.809 0000:00:11.0: build_io_request_5 test passed 00:07:47.809 0000:00:11.0: build_io_request_6 test passed 00:07:47.809 0000:00:11.0: build_io_request_7 test passed 00:07:47.809 0000:00:11.0: build_io_request_10 test passed 00:07:47.809 Cleaning up... 00:07:47.809 00:07:47.809 real 0m0.301s 00:07:47.809 user 0m0.154s 00:07:47.809 sys 0m0.103s 00:07:47.809 06:31:39 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.809 ************************************ 00:07:47.809 END TEST nvme_sgl 00:07:47.809 ************************************ 00:07:47.809 06:31:39 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:47.809 06:31:39 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:47.809 06:31:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.809 06:31:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.809 06:31:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.809 ************************************ 00:07:47.809 START TEST nvme_e2edp 00:07:47.809 ************************************ 00:07:47.809 06:31:39 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:48.069 NVMe Write/Read with End-to-End data protection test 00:07:48.069 Attached to 0000:00:10.0 00:07:48.069 Attached to 0000:00:11.0 00:07:48.069 Attached to 0000:00:13.0 00:07:48.069 Attached to 0000:00:12.0 00:07:48.069 Cleaning up... 
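A hedged aside (not part of this run): before reading much into an e2edp pass, it can help to confirm from the host whether a namespace is even formatted with protection information; the device path below is an assumption, and the commands rely only on standard nvme-cli:

  # dps reports the Data Protection Settings; 0 means no end-to-end protection enabled
  sudo nvme id-ns /dev/nvme0n1 | grep -E 'dps|flbas'
  # -H lists the supported LBA formats, including metadata size per format
  sudo nvme id-ns /dev/nvme0n1 -H | grep -i 'lba format'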
00:07:48.069 00:07:48.069 real 0m0.218s 00:07:48.070 user 0m0.078s 00:07:48.070 sys 0m0.099s 00:07:48.070 06:31:39 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.070 06:31:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 ************************************ 00:07:48.070 END TEST nvme_e2edp 00:07:48.070 ************************************ 00:07:48.070 06:31:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:48.070 06:31:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.070 06:31:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.070 06:31:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 ************************************ 00:07:48.070 START TEST nvme_reserve 00:07:48.070 ************************************ 00:07:48.070 06:31:39 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:48.328 ===================================================== 00:07:48.328 NVMe Controller at PCI bus 0, device 16, function 0 00:07:48.328 ===================================================== 00:07:48.328 Reservations: Not Supported 00:07:48.328 ===================================================== 00:07:48.328 NVMe Controller at PCI bus 0, device 17, function 0 00:07:48.328 ===================================================== 00:07:48.328 Reservations: Not Supported 00:07:48.328 ===================================================== 00:07:48.328 NVMe Controller at PCI bus 0, device 19, function 0 00:07:48.328 ===================================================== 00:07:48.328 Reservations: Not Supported 00:07:48.328 ===================================================== 00:07:48.328 NVMe Controller at PCI bus 0, device 18, function 0 00:07:48.328 ===================================================== 00:07:48.328 Reservations: Not Supported 00:07:48.328 Reservation test passed 00:07:48.328 ************************************ 00:07:48.328 END TEST nvme_reserve 00:07:48.328 ************************************ 00:07:48.328 00:07:48.328 real 0m0.222s 00:07:48.328 user 0m0.074s 00:07:48.328 sys 0m0.100s 00:07:48.328 06:31:40 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.328 06:31:40 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 06:31:40 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:48.328 06:31:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.328 06:31:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.328 06:31:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 ************************************ 00:07:48.328 START TEST nvme_err_injection 00:07:48.328 ************************************ 00:07:48.328 06:31:40 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:48.587 NVMe Error Injection test 00:07:48.587 Attached to 0000:00:10.0 00:07:48.587 Attached to 0000:00:11.0 00:07:48.587 Attached to 0000:00:13.0 00:07:48.587 Attached to 0000:00:12.0 00:07:48.587 0000:00:12.0: get features failed as expected 00:07:48.587 0000:00:10.0: get features failed as expected 00:07:48.587 0000:00:11.0: get features failed as expected 00:07:48.587 0000:00:13.0: get features failed as expected 00:07:48.587 
0000:00:10.0: get features successfully as expected 00:07:48.587 0000:00:11.0: get features successfully as expected 00:07:48.587 0000:00:13.0: get features successfully as expected 00:07:48.587 0000:00:12.0: get features successfully as expected 00:07:48.587 0000:00:12.0: read failed as expected 00:07:48.587 0000:00:10.0: read failed as expected 00:07:48.587 0000:00:11.0: read failed as expected 00:07:48.587 0000:00:13.0: read failed as expected 00:07:48.587 0000:00:12.0: read successfully as expected 00:07:48.587 0000:00:10.0: read successfully as expected 00:07:48.587 0000:00:11.0: read successfully as expected 00:07:48.587 0000:00:13.0: read successfully as expected 00:07:48.587 Cleaning up... 00:07:48.587 00:07:48.587 real 0m0.230s 00:07:48.587 user 0m0.081s 00:07:48.587 sys 0m0.099s 00:07:48.587 06:31:40 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.588 06:31:40 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:48.588 ************************************ 00:07:48.588 END TEST nvme_err_injection 00:07:48.588 ************************************ 00:07:48.588 06:31:40 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:48.588 06:31:40 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:48.588 06:31:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.588 06:31:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.588 ************************************ 00:07:48.588 START TEST nvme_overhead 00:07:48.588 ************************************ 00:07:48.588 06:31:40 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:49.969 Initializing NVMe Controllers 00:07:49.969 Attached to 0000:00:10.0 00:07:49.969 Attached to 0000:00:11.0 00:07:49.969 Attached to 0000:00:13.0 00:07:49.969 Attached to 0000:00:12.0 00:07:49.969 Initialization complete. Launching workers. 
00:07:49.969 submit (in ns)   avg, min, max = 12755.0, 11469.2, 199233.8
00:07:49.969 complete (in ns) avg, min, max =  8485.7,  7391.5, 197976.2
00:07:49.969 
00:07:49.969 Submit histogram
00:07:49.969 ================
00:07:49.969        Range in us     Cumulative Count
00:07:49.969 [per-bucket submit-latency distribution omitted]
00:07:49.970 
00:07:49.970 Complete histogram
00:07:49.970 ==================
00:07:49.970        Range in us     Cumulative Count
00:07:49.970 [per-bucket completion-latency distribution omitted]
00:07:49.971 ************************************
00:07:49.971 END TEST nvme_overhead
00:07:49.971 ************************************
00:07:49.971 
00:07:49.971 real 0m1.246s
00:07:49.971 user 0m1.080s
00:07:49.971 sys 0m0.106s
00:07:49.971 06:31:41 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:07:49.971 06:31:41 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:49.971 06:31:41 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:49.971 06:31:41 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:49.971 06:31:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.971 06:31:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.971 ************************************ 00:07:49.971 START TEST nvme_arbitration 00:07:49.971 ************************************ 00:07:49.971 06:31:41 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:53.257 Initializing NVMe Controllers 00:07:53.257 Attached to 0000:00:10.0 00:07:53.257 Attached to 0000:00:11.0 00:07:53.257 Attached to 0000:00:13.0 00:07:53.257 Attached to 0000:00:12.0 00:07:53.257 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:53.257 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:53.257 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:53.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:53.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:53.257 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:53.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:53.257 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:53.257 Initialization complete. Launching workers. 00:07:53.257 Starting thread on core 1 with urgent priority queue 00:07:53.257 Starting thread on core 2 with urgent priority queue 00:07:53.257 Starting thread on core 3 with urgent priority queue 00:07:53.257 Starting thread on core 0 with urgent priority queue 00:07:53.257 QEMU NVMe Ctrl (12340 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:53.257 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:53.257 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:53.257 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:53.257 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:07:53.257 QEMU NVMe Ctrl (12342 ) core 3: 853.33 IO/s 117.19 secs/100000 ios 00:07:53.257 ======================================================== 00:07:53.257 00:07:53.257 ************************************ 00:07:53.257 END TEST nvme_arbitration 00:07:53.257 ************************************ 00:07:53.257 00:07:53.257 real 0m3.347s 00:07:53.257 user 0m9.288s 00:07:53.257 sys 0m0.111s 00:07:53.257 06:31:45 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.257 06:31:45 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:53.257 06:31:45 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:53.257 06:31:45 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.257 06:31:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.257 06:31:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.257 ************************************ 00:07:53.257 START TEST nvme_single_aen 00:07:53.257 ************************************ 00:07:53.257 06:31:45 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:53.516 Asynchronous Event 
Request test 00:07:53.516 Attached to 0000:00:10.0 00:07:53.516 Attached to 0000:00:11.0 00:07:53.516 Attached to 0000:00:13.0 00:07:53.516 Attached to 0000:00:12.0 00:07:53.516 Reset controller to setup AER completions for this process 00:07:53.516 Registering asynchronous event callbacks... 00:07:53.516 Getting orig temperature thresholds of all controllers 00:07:53.516 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.516 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.516 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.516 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.516 Setting all controllers temperature threshold low to trigger AER 00:07:53.516 Waiting for all controllers temperature threshold to be set lower 00:07:53.516 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.516 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:53.516 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.516 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:53.516 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.516 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:53.516 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.516 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:53.516 Waiting for all controllers to trigger AER and reset threshold 00:07:53.516 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.516 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.516 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.516 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.516 Cleaning up... 
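The AER run above works by moving the temperature threshold rather than the temperature: each controller reports an original threshold of 343 Kelvin (70 Celsius) and a current composite temperature of 323 Kelvin (50 Celsius), so once the test programs a threshold below 323 Kelvin the controller raises a temperature Asynchronous Event and the aer_cb handler restores the original value. A minimal sketch of that arithmetic follows; the Kelvin values are copied from the trace, and the helper name is illustrative, not part of the test.

    # Temperature values reported by the controllers in the run above.
    orig_threshold_k=343      # 343 K = 70 C, the controllers' original threshold
    current_temp_k=323        # 323 K = 50 C, the current composite temperature

    k_to_c() { echo $(( $1 - 273 )); }          # NVMe reports temperatures in Kelvin
    echo "threshold: ${orig_threshold_k} K = $(k_to_c "$orig_threshold_k") C"
    echo "current:   ${current_temp_k} K = $(k_to_c "$current_temp_k") C"

    # The test triggers the AER by setting the threshold below the current reading;
    # any value under 323 K would do, after which the callback restores 343 K.
    new_threshold_k=$(( current_temp_k - 1 ))
    (( new_threshold_k < current_temp_k )) && echo "threshold ${new_threshold_k} K will fire a temperature AEN"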
00:07:53.516 ************************************ 00:07:53.516 END TEST nvme_single_aen 00:07:53.516 ************************************ 00:07:53.516 00:07:53.516 real 0m0.221s 00:07:53.516 user 0m0.076s 00:07:53.516 sys 0m0.101s 00:07:53.516 06:31:45 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.516 06:31:45 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:53.516 06:31:45 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:53.516 06:31:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.516 06:31:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.516 06:31:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.516 ************************************ 00:07:53.516 START TEST nvme_doorbell_aers 00:07:53.516 ************************************ 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.516 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:53.773 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:53.773 06:31:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:53.773 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:53.773 06:31:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:53.773 [2024-11-19 06:31:45.672671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:03.736 Executing: test_write_invalid_db 00:08:03.736 Waiting for AER completion... 00:08:03.736 Failure: test_write_invalid_db 00:08:03.736 00:08:03.736 Executing: test_invalid_db_write_overflow_sq 00:08:03.736 Waiting for AER completion... 00:08:03.736 Failure: test_invalid_db_write_overflow_sq 00:08:03.736 00:08:03.736 Executing: test_invalid_db_write_overflow_cq 00:08:03.736 Waiting for AER completion... 
00:08:03.736 Failure: test_invalid_db_write_overflow_cq 00:08:03.736 00:08:03.736 06:31:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.736 06:31:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:03.994 [2024-11-19 06:31:55.703859] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:13.960 Executing: test_write_invalid_db 00:08:13.960 Waiting for AER completion... 00:08:13.960 Failure: test_write_invalid_db 00:08:13.960 00:08:13.960 Executing: test_invalid_db_write_overflow_sq 00:08:13.960 Waiting for AER completion... 00:08:13.960 Failure: test_invalid_db_write_overflow_sq 00:08:13.960 00:08:13.960 Executing: test_invalid_db_write_overflow_cq 00:08:13.960 Waiting for AER completion... 00:08:13.960 Failure: test_invalid_db_write_overflow_cq 00:08:13.960 00:08:13.960 06:32:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:13.960 06:32:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:13.960 [2024-11-19 06:32:05.735057] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:23.920 Executing: test_write_invalid_db 00:08:23.920 Waiting for AER completion... 00:08:23.920 Failure: test_write_invalid_db 00:08:23.920 00:08:23.920 Executing: test_invalid_db_write_overflow_sq 00:08:23.920 Waiting for AER completion... 00:08:23.920 Failure: test_invalid_db_write_overflow_sq 00:08:23.920 00:08:23.920 Executing: test_invalid_db_write_overflow_cq 00:08:23.920 Waiting for AER completion... 00:08:23.920 Failure: test_invalid_db_write_overflow_cq 00:08:23.920 00:08:23.920 06:32:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:23.920 06:32:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:23.920 [2024-11-19 06:32:15.764586] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:33.879 Executing: test_write_invalid_db 00:08:33.879 Waiting for AER completion... 00:08:33.879 Failure: test_write_invalid_db 00:08:33.879 00:08:33.879 Executing: test_invalid_db_write_overflow_sq 00:08:33.879 Waiting for AER completion... 00:08:33.879 Failure: test_invalid_db_write_overflow_sq 00:08:33.880 00:08:33.880 Executing: test_invalid_db_write_overflow_cq 00:08:33.880 Waiting for AER completion... 
00:08:33.880 Failure: test_invalid_db_write_overflow_cq 00:08:33.880 00:08:33.880 00:08:33.880 real 0m40.189s 00:08:33.880 user 0m34.176s 00:08:33.880 sys 0m5.632s 00:08:33.880 06:32:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.880 ************************************ 00:08:33.880 06:32:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:33.880 END TEST nvme_doorbell_aers 00:08:33.880 ************************************ 00:08:33.880 06:32:25 nvme -- nvme/nvme.sh@97 -- # uname 00:08:33.880 06:32:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:33.880 06:32:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:33.880 06:32:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:33.880 06:32:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.880 06:32:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.880 ************************************ 00:08:33.880 START TEST nvme_multi_aen 00:08:33.880 ************************************ 00:08:33.880 06:32:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:34.138 [2024-11-19 06:32:25.817432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.817502] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.817512] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.819350] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.819471] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.819529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.820729] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.820829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.820883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.821933] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.822011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 00:08:34.138 [2024-11-19 06:32:25.822021] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63249) is not found. Dropping the request. 
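The four doorbell_aers runs above are driven by the loop traced at nvme/nvme.sh@72-73: every discovered BDF gets its own 10-second, timeout-guarded invocation, which is why the same write_invalid_db / overflow_sq / overflow_cq sequence repeats once per controller. A condensed sketch of that loop, with the paths and BDF list taken from this run (not a literal copy of nvme/nvme.sh):

    # Condensed from the xtrace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)   # from gen_nvme.sh | jq in this run

    for bdf in "${bdfs[@]}"; do
        # --preserve-status keeps the test binary's exit code even when the 10 s timeout fires.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done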
00:08:34.138 Child process pid: 63775 00:08:34.138 [Child] Asynchronous Event Request test 00:08:34.138 [Child] Attached to 0000:00:10.0 00:08:34.138 [Child] Attached to 0000:00:11.0 00:08:34.138 [Child] Attached to 0000:00:13.0 00:08:34.138 [Child] Attached to 0000:00:12.0 00:08:34.138 [Child] Registering asynchronous event callbacks... 00:08:34.138 [Child] Getting orig temperature thresholds of all controllers 00:08:34.138 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.138 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.138 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.138 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.138 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:34.138 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.138 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.138 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.138 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.138 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.138 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.138 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.138 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.138 [Child] Cleaning up... 00:08:34.395 Asynchronous Event Request test 00:08:34.396 Attached to 0000:00:10.0 00:08:34.396 Attached to 0000:00:11.0 00:08:34.396 Attached to 0000:00:13.0 00:08:34.396 Attached to 0000:00:12.0 00:08:34.396 Reset controller to setup AER completions for this process 00:08:34.396 Registering asynchronous event callbacks... 
00:08:34.396 Getting orig temperature thresholds of all controllers 00:08:34.396 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.396 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.396 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.396 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.396 Setting all controllers temperature threshold low to trigger AER 00:08:34.396 Waiting for all controllers temperature threshold to be set lower 00:08:34.396 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.396 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:34.396 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.396 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:34.396 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.396 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:34.396 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.396 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:34.396 Waiting for all controllers to trigger AER and reset threshold 00:08:34.396 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.396 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.396 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.396 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.396 Cleaning up... 00:08:34.396 00:08:34.396 real 0m0.432s 00:08:34.396 user 0m0.128s 00:08:34.396 sys 0m0.194s 00:08:34.396 06:32:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.396 06:32:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:34.396 ************************************ 00:08:34.396 END TEST nvme_multi_aen 00:08:34.396 ************************************ 00:08:34.396 06:32:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:34.396 06:32:26 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:34.396 06:32:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.396 06:32:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.396 ************************************ 00:08:34.396 START TEST nvme_startup 00:08:34.396 ************************************ 00:08:34.396 06:32:26 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:34.396 Initializing NVMe Controllers 00:08:34.396 Attached to 0000:00:10.0 00:08:34.396 Attached to 0000:00:11.0 00:08:34.396 Attached to 0000:00:13.0 00:08:34.396 Attached to 0000:00:12.0 00:08:34.396 Initialization complete. 00:08:34.396 Time used:143794.547 (us). 
00:08:34.396 00:08:34.396 real 0m0.206s 00:08:34.396 user 0m0.061s 00:08:34.396 sys 0m0.097s 00:08:34.396 06:32:26 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.396 06:32:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:34.396 ************************************ 00:08:34.396 END TEST nvme_startup 00:08:34.396 ************************************ 00:08:34.676 06:32:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:34.676 06:32:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.676 06:32:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.676 06:32:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.676 ************************************ 00:08:34.676 START TEST nvme_multi_secondary 00:08:34.676 ************************************ 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63825 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63826 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:34.676 06:32:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:37.964 Initializing NVMe Controllers 00:08:37.964 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.964 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.964 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.964 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.964 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:37.964 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:37.964 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:37.964 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:37.964 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:37.964 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:37.964 Initialization complete. Launching workers. 
00:08:37.964 ======================================================== 00:08:37.964 Latency(us) 00:08:37.964 Device Information : IOPS MiB/s Average min max 00:08:37.964 PCIE (0000:00:10.0) NSID 1 from core 2: 2401.48 9.38 6660.81 836.26 23802.14 00:08:37.964 PCIE (0000:00:11.0) NSID 1 from core 2: 2401.48 9.38 6663.02 866.62 27356.18 00:08:37.964 PCIE (0000:00:13.0) NSID 1 from core 2: 2401.48 9.38 6671.66 855.48 27168.97 00:08:37.964 PCIE (0000:00:12.0) NSID 1 from core 2: 2401.48 9.38 6672.83 850.12 26585.87 00:08:37.964 PCIE (0000:00:12.0) NSID 2 from core 2: 2401.48 9.38 6673.37 864.70 26294.35 00:08:37.964 PCIE (0000:00:12.0) NSID 3 from core 2: 2401.48 9.38 6672.80 865.27 24079.57 00:08:37.964 ======================================================== 00:08:37.964 Total : 14408.88 56.28 6669.08 836.26 27356.18 00:08:37.964 00:08:37.964 06:32:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63825 00:08:37.964 Initializing NVMe Controllers 00:08:37.964 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.964 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.964 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.965 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.965 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:37.965 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:37.965 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:37.965 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:37.965 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:37.965 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:37.965 Initialization complete. Launching workers. 00:08:37.965 ======================================================== 00:08:37.965 Latency(us) 00:08:37.965 Device Information : IOPS MiB/s Average min max 00:08:37.965 PCIE (0000:00:10.0) NSID 1 from core 1: 5136.97 20.07 3113.19 946.65 10872.71 00:08:37.965 PCIE (0000:00:11.0) NSID 1 from core 1: 5136.97 20.07 3114.40 1057.82 11650.57 00:08:37.965 PCIE (0000:00:13.0) NSID 1 from core 1: 5136.97 20.07 3114.64 1065.70 11645.95 00:08:37.965 PCIE (0000:00:12.0) NSID 1 from core 1: 5136.97 20.07 3114.65 950.44 10876.49 00:08:37.965 PCIE (0000:00:12.0) NSID 2 from core 1: 5136.97 20.07 3114.99 939.44 11781.39 00:08:37.965 PCIE (0000:00:12.0) NSID 3 from core 1: 5136.97 20.07 3115.46 1009.75 10117.59 00:08:37.965 ======================================================== 00:08:37.965 Total : 30821.81 120.40 3114.56 939.44 11781.39 00:08:37.965 00:08:39.875 Initializing NVMe Controllers 00:08:39.875 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.875 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.875 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.875 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.875 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:39.875 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:39.875 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:39.875 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:39.875 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:39.875 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:39.875 Initialization complete. Launching workers. 
00:08:39.875 ======================================================== 00:08:39.875 Latency(us) 00:08:39.875 Device Information : IOPS MiB/s Average min max 00:08:39.875 PCIE (0000:00:10.0) NSID 1 from core 0: 6973.61 27.24 2293.08 696.53 9546.23 00:08:39.875 PCIE (0000:00:11.0) NSID 1 from core 0: 6973.61 27.24 2293.96 699.53 9561.29 00:08:39.875 PCIE (0000:00:13.0) NSID 1 from core 0: 6973.61 27.24 2294.04 709.86 9066.59 00:08:39.875 PCIE (0000:00:12.0) NSID 1 from core 0: 6973.61 27.24 2294.03 721.44 9199.66 00:08:39.875 PCIE (0000:00:12.0) NSID 2 from core 0: 6973.61 27.24 2294.02 723.01 8768.34 00:08:39.875 PCIE (0000:00:12.0) NSID 3 from core 0: 6973.61 27.24 2293.99 723.40 9516.33 00:08:39.875 ======================================================== 00:08:39.875 Total : 41841.65 163.44 2293.85 696.53 9561.29 00:08:39.875 00:08:39.875 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63826 00:08:39.876 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63901 00:08:39.876 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:39.876 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63902 00:08:39.876 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:39.876 06:32:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:43.174 Initializing NVMe Controllers 00:08:43.174 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.174 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.174 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.174 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.174 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.174 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.174 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.174 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.174 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.175 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.175 Initialization complete. Launching workers. 
00:08:43.175 ======================================================== 00:08:43.175 Latency(us) 00:08:43.175 Device Information : IOPS MiB/s Average min max 00:08:43.175 PCIE (0000:00:10.0) NSID 1 from core 0: 3699.34 14.45 4323.38 1130.91 11721.28 00:08:43.175 PCIE (0000:00:11.0) NSID 1 from core 0: 3699.34 14.45 4325.18 1154.98 12394.94 00:08:43.175 PCIE (0000:00:13.0) NSID 1 from core 0: 3699.34 14.45 4325.04 1112.50 12591.49 00:08:43.175 PCIE (0000:00:12.0) NSID 1 from core 0: 3699.34 14.45 4324.93 1122.51 13330.89 00:08:43.175 PCIE (0000:00:12.0) NSID 2 from core 0: 3699.34 14.45 4324.81 1158.91 13779.57 00:08:43.175 PCIE (0000:00:12.0) NSID 3 from core 0: 3704.67 14.47 4318.46 1153.78 12679.52 00:08:43.175 ======================================================== 00:08:43.175 Total : 22201.38 86.72 4323.63 1112.50 13779.57 00:08:43.175 00:08:43.175 Initializing NVMe Controllers 00:08:43.175 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.175 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.175 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.175 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.175 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:43.175 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:43.175 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:43.175 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:43.175 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:43.175 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:43.175 Initialization complete. Launching workers. 00:08:43.175 ======================================================== 00:08:43.175 Latency(us) 00:08:43.175 Device Information : IOPS MiB/s Average min max 00:08:43.175 PCIE (0000:00:10.0) NSID 1 from core 1: 3354.45 13.10 4767.86 857.72 13537.96 00:08:43.175 PCIE (0000:00:11.0) NSID 1 from core 1: 3354.45 13.10 4769.19 884.25 13157.07 00:08:43.175 PCIE (0000:00:13.0) NSID 1 from core 1: 3354.45 13.10 4769.02 882.73 13823.60 00:08:43.175 PCIE (0000:00:12.0) NSID 1 from core 1: 3354.45 13.10 4768.87 878.01 13818.22 00:08:43.175 PCIE (0000:00:12.0) NSID 2 from core 1: 3354.45 13.10 4768.69 879.96 14176.41 00:08:43.175 PCIE (0000:00:12.0) NSID 3 from core 1: 3354.45 13.10 4768.51 864.57 13685.92 00:08:43.175 ======================================================== 00:08:43.175 Total : 20126.72 78.62 4768.69 857.72 14176.41 00:08:43.175 00:08:45.719 Initializing NVMe Controllers 00:08:45.719 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.719 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.719 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.719 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.719 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:45.719 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:45.719 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:45.719 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:45.719 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:45.719 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:45.719 Initialization complete. Launching workers. 
00:08:45.719 ======================================================== 00:08:45.719 Latency(us) 00:08:45.719 Device Information : IOPS MiB/s Average min max 00:08:45.719 PCIE (0000:00:10.0) NSID 1 from core 2: 2339.78 9.14 6836.18 799.91 36508.69 00:08:45.719 PCIE (0000:00:11.0) NSID 1 from core 2: 2339.78 9.14 6837.10 826.94 42943.38 00:08:45.719 PCIE (0000:00:13.0) NSID 1 from core 2: 2339.78 9.14 6832.08 822.51 32773.37 00:08:45.719 PCIE (0000:00:12.0) NSID 1 from core 2: 2339.78 9.14 6831.86 828.54 32730.08 00:08:45.719 PCIE (0000:00:12.0) NSID 2 from core 2: 2339.78 9.14 6831.67 825.27 35009.36 00:08:45.719 PCIE (0000:00:12.0) NSID 3 from core 2: 2342.98 9.15 6821.81 831.32 29534.04 00:08:45.719 ======================================================== 00:08:45.719 Total : 14041.90 54.85 6831.78 799.91 42943.38 00:08:45.719 00:08:45.719 06:32:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63901 00:08:45.719 ************************************ 00:08:45.719 END TEST nvme_multi_secondary 00:08:45.719 ************************************ 00:08:45.719 06:32:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63902 00:08:45.719 00:08:45.719 real 0m10.749s 00:08:45.719 user 0m18.401s 00:08:45.719 sys 0m0.744s 00:08:45.719 06:32:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.719 06:32:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:45.719 06:32:37 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:45.719 06:32:37 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62845 ]] 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1094 -- # kill 62845 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1095 -- # wait 62845 00:08:45.719 [2024-11-19 06:32:37.160739] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.160829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.160865] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.160887] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.163861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.163946] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.163966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.163986] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.167348] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 
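In the spdk_nvme_perf tables above, the MiB/s column is simply IOPS multiplied by the 4 KiB I/O size from the command line (-o 4096): the core-2 namespaces report 2401.48 IOPS, and 2401.48 x 4096 / 1048576 is roughly 9.38 MiB/s, exactly as printed. A quick cross-check of that relationship, with values copied from the run (the check helper is illustrative only):

    # Cross-check IOPS vs. MiB/s from the tables above; awk is only used for the float math.
    io_size=4096   # bytes, from '-o 4096' on the spdk_nvme_perf command line

    check() {  # args: iops  reported_mib_s
        awk -v iops="$1" -v rep="$2" -v sz="$io_size" \
            'BEGIN { printf "%.2f IOPS -> %.2f MiB/s (reported %.2f)\n", iops, iops*sz/1048576, rep }'
    }

    check 2401.48  9.38    # per-namespace rate in the core-2 (secondary) run
    check 5136.97 20.07    # core-1 run
    check 6973.61 27.24    # core-0 run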
00:08:45.719 [2024-11-19 06:32:37.167432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.167453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.167473] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.170408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.170476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.170494] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 [2024-11-19 06:32:37.170513] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63774) is not found. Dropping the request. 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:45.719 06:32:37 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.719 06:32:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.719 ************************************ 00:08:45.719 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:45.719 ************************************ 00:08:45.719 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:45.719 * Looking for test storage... 
00:08:45.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:45.719 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.719 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.719 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.719 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.720 --rc genhtml_branch_coverage=1 00:08:45.720 --rc genhtml_function_coverage=1 00:08:45.720 --rc genhtml_legend=1 00:08:45.720 --rc geninfo_all_blocks=1 00:08:45.720 --rc geninfo_unexecuted_blocks=1 00:08:45.720 00:08:45.720 ' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.720 --rc genhtml_branch_coverage=1 00:08:45.720 --rc genhtml_function_coverage=1 00:08:45.720 --rc genhtml_legend=1 00:08:45.720 --rc geninfo_all_blocks=1 00:08:45.720 --rc geninfo_unexecuted_blocks=1 00:08:45.720 00:08:45.720 ' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.720 --rc genhtml_branch_coverage=1 00:08:45.720 --rc genhtml_function_coverage=1 00:08:45.720 --rc genhtml_legend=1 00:08:45.720 --rc geninfo_all_blocks=1 00:08:45.720 --rc geninfo_unexecuted_blocks=1 00:08:45.720 00:08:45.720 ' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.720 --rc genhtml_branch_coverage=1 00:08:45.720 --rc genhtml_function_coverage=1 00:08:45.720 --rc genhtml_legend=1 00:08:45.720 --rc geninfo_all_blocks=1 00:08:45.720 --rc geninfo_unexecuted_blocks=1 00:08:45.720 00:08:45.720 ' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:45.720 
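The scripts/common.sh trace above ('lt 1.15 2') is the guard that compares the installed lcov version against 2 and picks the matching --rc option spelling before LCOV_OPTS is exported: both version strings are split into numeric fields and compared field by field. A simplified stand-in for that cmp_versions logic (the original also treats '-' and ':' as separators):

    # Simplified version of the 'lt'/cmp_versions check traced above (dots only).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # first differing field decides
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not 'less than'
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the 'lt 1.15 2' check above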
06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64059 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64059 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64059 ']' 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
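get_first_nvme_bdf, traced above, is how the reset test picks the controller it will later attach as nvme0: gen_nvme.sh emits a JSON config entry for every local NVMe device, jq pulls out the PCI addresses, and the first entry (0000:00:10.0 here) wins. A sketch of that discovery, assuming the repo layout used in this run:

    # BDF discovery as traced above (repo path taken from this run).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf 'found: %s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 here
    echo "using bdf ${bdfs[0]}"         # the controller later attached as nvme0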
00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.720 06:32:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:45.720 [2024-11-19 06:32:37.627009] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:45.720 [2024-11-19 06:32:37.627129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64059 ] 00:08:45.981 [2024-11-19 06:32:37.796173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.242 [2024-11-19 06:32:37.921590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.242 [2024-11-19 06:32:37.921772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.242 [2024-11-19 06:32:37.921941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.242 [2024-11-19 06:32:37.921963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.814 nvme0n1 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_4Y4ja.txt 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.814 true 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731997958 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64082 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:46.814 06:32:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:46.814 06:32:38 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.359 [2024-11-19 06:32:40.671668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:49.359 [2024-11-19 06:32:40.671966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:49.359 [2024-11-19 06:32:40.671989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:49.359 [2024-11-19 06:32:40.672001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:49.359 [2024-11-19 06:32:40.675979] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:49.359 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64082 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64082 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64082 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_4Y4ja.txt 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:49.359 06:32:40 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_4Y4ja.txt 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64059 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64059 ']' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64059 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64059 00:08:49.359 killing process with pid 64059 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64059' 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64059 00:08:49.359 06:32:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64059 00:08:50.299 06:32:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:50.299 06:32:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:50.299 00:08:50.299 
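The base64 juggling above unpacks the 16-byte completion entry returned by the injected admin command: the .cpl field saved to /tmp/err_inj_4Y4ja.txt holds it as AAAAAAAAAAAAAAAAAAACAA==, and the script slices the status field out of the last two bytes to confirm the injected Status Code 0x1 (Invalid Opcode) with Status Code Type 0x0. A worked decode of that same value; the bit offsets follow the NVMe completion layout, and the variable names are illustrative:

    # Decode the completion captured above; matches spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==.
    cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='
    bytes=($(printf '%s' "$cpl_b64" | base64 -d | hexdump -ve '/1 "0x%02x\n"'))

    # Bytes 14-15 are the little-endian upper half of completion dword 3
    # (bit 0 = phase tag, bits 1-8 = SC, bits 9-11 = SCT).
    status=$(( (bytes[15] << 8) | bytes[14] ))
    sc=$((  (status >> 1) & 0xff ))
    sct=$(( (status >> 9) & 0x7 ))

    printf 'status=0x%04x sc=0x%x sct=0x%x\n' "$status" "$sc" "$sct"
    # -> status=0x0002 sc=0x1 sct=0x0, i.e. the injected Invalid Opcode with generic status type.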
real 0m4.749s 00:08:50.299 user 0m16.637s 00:08:50.299 sys 0m0.568s 00:08:50.299 06:32:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.299 06:32:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.299 ************************************ 00:08:50.299 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:50.299 ************************************ 00:08:50.299 06:32:42 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:50.299 06:32:42 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:50.299 06:32:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.299 06:32:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.299 06:32:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.299 ************************************ 00:08:50.299 START TEST nvme_fio 00:08:50.299 ************************************ 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.299 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.299 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.560 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.560 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.821 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.821 06:32:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.821 06:32:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.083 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.083 fio-3.35 00:08:51.083 Starting 1 thread 00:08:56.357 00:08:56.357 test: (groupid=0, jobs=1): err= 0: pid=64222: Tue Nov 19 06:32:47 2024 00:08:56.357 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:08:56.357 slat (nsec): min=3304, max=71744, avg=5706.08, stdev=2979.19 00:08:56.357 clat (usec): min=260, max=10073, avg=3642.14, stdev=1279.84 00:08:56.357 lat (usec): min=265, max=10137, avg=3647.84, stdev=1281.01 00:08:56.357 clat percentiles (usec): 00:08:56.358 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2573], 00:08:56.358 | 30.00th=[ 2737], 40.00th=[ 2933], 50.00th=[ 3163], 60.00th=[ 3523], 00:08:56.358 | 70.00th=[ 4146], 80.00th=[ 4817], 90.00th=[ 5604], 95.00th=[ 6194], 00:08:56.358 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 8979], 00:08:56.358 | 99.99th=[10028] 00:08:56.358 bw ( KiB/s): min=60910, max=79032, per=100.00%, avg=71511.33, stdev=9445.61, samples=3 00:08:56.358 iops : min=15227, max=19758, avg=17877.67, stdev=2361.68, samples=3 00:08:56.358 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2001msec); 0 zone resets 00:08:56.358 slat (nsec): min=3346, max=71604, avg=5978.77, stdev=3196.23 00:08:56.358 clat (usec): min=269, max=9997, avg=3671.81, stdev=1291.18 00:08:56.358 lat (usec): min=273, max=10017, avg=3677.79, stdev=1292.40 00:08:56.358 clat percentiles (usec): 00:08:56.358 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:08:56.358 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3556], 00:08:56.358 | 70.00th=[ 4228], 80.00th=[ 4883], 90.00th=[ 5669], 95.00th=[ 6194], 00:08:56.358 | 99.00th=[ 7242], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9241], 00:08:56.358 | 99.99th=[ 9896] 00:08:56.358 bw ( KiB/s): min=60654, max=79080, per=100.00%, avg=71447.33, stdev=9611.02, samples=3 00:08:56.358 iops : min=15163, max=19770, avg=17861.67, stdev=2403.04, samples=3 
00:08:56.358 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:08:56.358 lat (msec) : 2=0.69%, 4=66.82%, 10=32.45%, 20=0.01% 00:08:56.358 cpu : usr=98.85%, sys=0.00%, ctx=8, majf=0, minf=607 00:08:56.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:56.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.358 issued rwts: total=34854,34903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.358 00:08:56.358 Run status group 0 (all jobs): 00:08:56.358 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:56.358 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:56.358 ----------------------------------------------------- 00:08:56.358 Suppressions used: 00:08:56.358 count bytes template 00:08:56.358 1 32 /usr/src/fio/parse.c 00:08:56.358 1 8 libtcmalloc_minimal.so 00:08:56.358 ----------------------------------------------------- 00:08:56.358 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:56.358 06:32:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:56.358 06:32:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:56.358 06:32:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:56.358 06:32:48 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:56.358 06:32:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:56.358 06:32:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:56.358 06:32:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:56.358 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:56.358 fio-3.35 00:08:56.358 Starting 1 thread 00:09:02.917 00:09:02.917 test: (groupid=0, jobs=1): err= 0: pid=64277: Tue Nov 19 06:32:54 2024 00:09:02.917 read: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec) 00:09:02.917 slat (nsec): min=3375, max=84134, avg=5062.93, stdev=2192.12 00:09:02.917 clat (usec): min=207, max=8428, avg=2791.54, stdev=821.59 00:09:02.917 lat (usec): min=211, max=8433, avg=2796.60, stdev=822.88 00:09:02.917 clat percentiles (usec): 00:09:02.917 | 1.00th=[ 2089], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:02.917 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:02.917 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3621], 95.00th=[ 4817], 00:09:02.917 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7439], 99.95th=[ 7701], 00:09:02.917 | 99.99th=[ 8029] 00:09:02.917 bw ( KiB/s): min=84792, max=97704, per=98.37%, avg=89986.67, stdev=6815.63, samples=3 00:09:02.917 iops : min=21198, max=24426, avg=22496.67, stdev=1703.91, samples=3 00:09:02.917 write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec); 0 zone resets 00:09:02.917 slat (usec): min=3, max=241, avg= 5.34, stdev= 2.47 00:09:02.917 clat (usec): min=272, max=8128, avg=2799.98, stdev=827.59 00:09:02.917 lat (usec): min=277, max=8158, avg=2805.32, stdev=828.90 00:09:02.917 clat percentiles (usec): 00:09:02.917 | 1.00th=[ 2114], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:02.917 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:02.917 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3621], 95.00th=[ 4883], 00:09:02.917 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7701], 00:09:02.917 | 99.99th=[ 7898] 00:09:02.917 bw ( KiB/s): min=84672, max=98720, per=99.23%, avg=90216.00, stdev=7477.15, samples=3 00:09:02.917 iops : min=21168, max=24680, avg=22554.00, stdev=1869.29, samples=3 00:09:02.917 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:02.917 lat (msec) : 2=0.34%, 4=91.82%, 10=7.80% 00:09:02.917 cpu : usr=99.20%, sys=0.05%, ctx=7, majf=0, minf=607 00:09:02.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.917 issued rwts: total=45761,45480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.917 00:09:02.918 Run status group 0 (all jobs): 00:09:02.918 READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec 00:09:02.918 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:09:02.918 ----------------------------------------------------- 00:09:02.918 Suppressions used: 00:09:02.918 count bytes 
template 00:09:02.918 1 32 /usr/src/fio/parse.c 00:09:02.918 1 8 libtcmalloc_minimal.so 00:09:02.918 ----------------------------------------------------- 00:09:02.918 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.918 06:32:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.918 06:32:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:03.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.179 fio-3.35 00:09:03.179 Starting 1 thread 00:09:09.764 00:09:09.764 test: (groupid=0, jobs=1): err= 0: pid=64340: Tue Nov 19 06:33:01 2024 00:09:09.764 read: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec) 00:09:09.764 slat (nsec): min=3922, max=76873, avg=5732.37, stdev=2128.91 00:09:09.764 clat (usec): min=245, max=7934, avg=2993.35, stdev=831.22 00:09:09.764 lat (usec): min=251, max=7957, 
avg=2999.08, stdev=832.44 00:09:09.764 clat percentiles (usec): 00:09:09.764 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:09.764 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2868], 00:09:09.764 | 70.00th=[ 2966], 80.00th=[ 3130], 90.00th=[ 3720], 95.00th=[ 4752], 00:09:09.764 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7439], 00:09:09.764 | 99.99th=[ 7701] 00:09:09.764 bw ( KiB/s): min=80888, max=86432, per=98.46%, avg=84109.33, stdev=2879.18, samples=3 00:09:09.764 iops : min=20222, max=21608, avg=21027.33, stdev=719.80, samples=3 00:09:09.764 write: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec); 0 zone resets 00:09:09.764 slat (nsec): min=4081, max=99131, avg=6028.78, stdev=2129.52 00:09:09.764 clat (usec): min=206, max=7872, avg=3003.20, stdev=817.42 00:09:09.764 lat (usec): min=211, max=7880, avg=3009.23, stdev=818.64 00:09:09.764 clat percentiles (usec): 00:09:09.764 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:09:09.764 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2868], 00:09:09.764 | 70.00th=[ 2966], 80.00th=[ 3163], 90.00th=[ 3720], 95.00th=[ 4686], 00:09:09.764 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7439], 00:09:09.764 | 99.99th=[ 7635] 00:09:09.764 bw ( KiB/s): min=80784, max=86456, per=99.28%, avg=84176.00, stdev=2995.05, samples=3 00:09:09.764 iops : min=20196, max=21614, avg=21044.00, stdev=748.76, samples=3 00:09:09.764 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:09.764 lat (msec) : 2=0.46%, 4=91.89%, 10=7.60% 00:09:09.764 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=607 00:09:09.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.764 issued rwts: total=42732,42416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.764 00:09:09.764 Run status group 0 (all jobs): 00:09:09.764 READ: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:09.764 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:09.764 ----------------------------------------------------- 00:09:09.764 Suppressions used: 00:09:09.764 count bytes template 00:09:09.764 1 32 /usr/src/fio/parse.c 00:09:09.764 1 8 libtcmalloc_minimal.so 00:09:09.764 ----------------------------------------------------- 00:09:09.764 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.764 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.025 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.025 06:33:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.025 06:33:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:10.286 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.286 fio-3.35 00:09:10.286 Starting 1 thread 00:09:18.429 00:09:18.429 test: (groupid=0, jobs=1): err= 0: pid=64402: Tue Nov 19 06:33:09 2024 00:09:18.429 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:09:18.429 slat (nsec): min=4040, max=94632, avg=6637.86, stdev=2964.08 00:09:18.429 clat (usec): min=265, max=10799, avg=3545.92, stdev=1141.50 00:09:18.429 lat (usec): min=271, max=10814, avg=3552.56, stdev=1143.01 00:09:18.429 clat percentiles (usec): 00:09:18.429 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2769], 00:09:18.429 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3163], 60.00th=[ 3392], 00:09:18.429 | 70.00th=[ 3654], 80.00th=[ 4047], 90.00th=[ 5145], 95.00th=[ 5997], 00:09:18.429 | 99.00th=[ 7635], 99.50th=[ 8291], 99.90th=[ 9765], 99.95th=[10028], 00:09:18.429 | 99.99th=[10552] 00:09:18.429 bw ( KiB/s): min=59208, max=74048, per=96.03%, avg=68760.00, stdev=8288.10, samples=3 00:09:18.429 iops : min=14802, max=18512, avg=17190.00, stdev=2072.03, samples=3 00:09:18.429 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec); 0 zone resets 00:09:18.429 slat (usec): min=4, max=104, avg= 7.06, stdev= 3.06 00:09:18.429 clat (usec): min=232, max=10921, avg=3575.12, stdev=1152.71 00:09:18.429 lat (usec): min=239, max=10963, avg=3582.18, stdev=1154.25 00:09:18.429 clat percentiles (usec): 00:09:18.429 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2802], 00:09:18.429 
| 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3195], 60.00th=[ 3392], 00:09:18.429 | 70.00th=[ 3654], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 6063], 00:09:18.429 | 99.00th=[ 7767], 99.50th=[ 8356], 99.90th=[ 9765], 99.95th=[10159], 00:09:18.429 | 99.99th=[10421] 00:09:18.429 bw ( KiB/s): min=59520, max=73696, per=95.83%, avg=68616.00, stdev=7895.31, samples=3 00:09:18.429 iops : min=14880, max=18424, avg=17154.00, stdev=1973.83, samples=3 00:09:18.429 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:18.429 lat (msec) : 2=0.23%, 4=78.65%, 10=21.01%, 20=0.06% 00:09:18.429 cpu : usr=98.95%, sys=0.20%, ctx=6, majf=0, minf=605 00:09:18.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:18.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.429 issued rwts: total=35818,35819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.429 00:09:18.429 Run status group 0 (all jobs): 00:09:18.429 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:18.429 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:18.429 ----------------------------------------------------- 00:09:18.429 Suppressions used: 00:09:18.429 count bytes template 00:09:18.429 1 32 /usr/src/fio/parse.c 00:09:18.429 1 8 libtcmalloc_minimal.so 00:09:18.430 ----------------------------------------------------- 00:09:18.430 00:09:18.430 06:33:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:18.430 06:33:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:18.430 ************************************ 00:09:18.430 END TEST nvme_fio 00:09:18.430 ************************************ 00:09:18.430 00:09:18.430 real 0m27.677s 00:09:18.430 user 0m16.375s 00:09:18.430 sys 0m20.831s 00:09:18.430 06:33:09 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.430 06:33:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:18.430 00:09:18.430 real 1m38.192s 00:09:18.430 user 3m39.455s 00:09:18.430 sys 0m31.637s 00:09:18.430 ************************************ 00:09:18.430 END TEST nvme 00:09:18.430 ************************************ 00:09:18.430 06:33:09 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.430 06:33:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.430 06:33:09 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:18.430 06:33:09 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:18.430 06:33:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.430 06:33:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.430 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:18.430 ************************************ 00:09:18.430 START TEST nvme_scc 00:09:18.430 ************************************ 00:09:18.430 06:33:09 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:18.430 * Looking for test storage... 
00:09:18.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.430 06:33:09 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.430 06:33:09 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.430 06:33:09 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.430 --rc genhtml_branch_coverage=1 00:09:18.430 --rc genhtml_function_coverage=1 00:09:18.430 --rc genhtml_legend=1 00:09:18.430 --rc geninfo_all_blocks=1 00:09:18.430 --rc geninfo_unexecuted_blocks=1 00:09:18.430 00:09:18.430 ' 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.430 --rc genhtml_branch_coverage=1 00:09:18.430 --rc genhtml_function_coverage=1 00:09:18.430 --rc genhtml_legend=1 00:09:18.430 --rc geninfo_all_blocks=1 00:09:18.430 --rc geninfo_unexecuted_blocks=1 00:09:18.430 00:09:18.430 ' 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.430 --rc genhtml_branch_coverage=1 00:09:18.430 --rc genhtml_function_coverage=1 00:09:18.430 --rc genhtml_legend=1 00:09:18.430 --rc geninfo_all_blocks=1 00:09:18.430 --rc geninfo_unexecuted_blocks=1 00:09:18.430 00:09:18.430 ' 00:09:18.430 06:33:10 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.430 --rc genhtml_branch_coverage=1 00:09:18.430 --rc genhtml_function_coverage=1 00:09:18.430 --rc genhtml_legend=1 00:09:18.430 --rc geninfo_all_blocks=1 00:09:18.430 --rc geninfo_unexecuted_blocks=1 00:09:18.430 00:09:18.430 ' 00:09:18.430 06:33:10 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.430 06:33:10 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.430 06:33:10 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.430 06:33:10 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.430 06:33:10 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.430 06:33:10 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:18.430 06:33:10 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
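The helpers sourced above (test/common/nvme/functions.sh) declare the ctrls/nvmes/bdfs associative arrays that the rest of this trace fills in: scan_nvme_ctrls walks /sys/class/nvme/nvme*, runs nvme id-ctrl against each controller, and stores every "field : value" pair in a per-controller bash array such as nvme0. A minimal, simplified sketch of that parse loop (the nvme binary path and the whitespace trimming here are illustrative assumptions; the traced helper additionally uses eval with a $ref prefix so the same loop can fill nvme0n1 from id-ns output):

    declare -A nvme0
    # id-ctrl prints one "field : value" pair per line; split on ':'
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # drop the padding around the field name
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }          # e.g. nvme0[vid]=0x1b36, nvme0[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme0)

The eval 'nvme0[...]=...' lines that follow are the traced equivalent of the assignment above, one log record per register.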
00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:18.430 06:33:10 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:18.430 06:33:10 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.430 06:33:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:18.431 06:33:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:18.431 06:33:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:18.431 06:33:10 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.689 Waiting for block devices as requested 00:09:18.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.947 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.947 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.231 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.231 06:33:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.231 06:33:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.231 06:33:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.231 06:33:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.231 06:33:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.231 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.232 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.233 06:33:15 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.233 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.234 06:33:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.234 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
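The loop being traced here is nvme/functions.sh feeding `nvme id-ns` output through an `IFS=: read -r reg val` loop and eval'ing each "field : value" pair into the nvme0n1 associative array. A minimal stand-alone sketch of that parsing pattern (not the actual SPDK helper; it assumes nvme-cli is installed and /dev/nvme0n1 exists) looks like this:

#!/usr/bin/env bash
# Sketch only: parse "field : value" lines from `nvme id-ns` into a bash
# associative array, the same shape of loop the trace above is executing.
declare -A ns_info
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                  # drop whitespace around the key
    val=${val#"${val%%[![:space:]]*}"}        # trim leading spaces from the value
    [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
done < <(nvme id-ns /dev/nvme0n1)             # assumes nvme-cli and this device node exist
echo "nsze=${ns_info[nsze]} ncap=${ns_info[ncap]} flbas=${ns_info[flbas]}"

Splitting on the first ':' keeps any later colons (e.g. the lbaf descriptors) inside the value, which is why entries such as "ms:0 lbads:9 rp:0" survive intact in the trace.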
00:09:24.235 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.236 06:33:15 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.236 06:33:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.236 06:33:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.236 06:33:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.236 06:33:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.236 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.237 06:33:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.237 
06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
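A few entries back (functions.sh@47-63) the script registered nvme0 in its ctrls/nvmes/bdfs maps and moved on to /sys/class/nvme/nvme1, whose `nvme id-ctrl` output is being parsed here. A rough sketch of that controller scan, assuming the standard Linux sysfs layout and simplified from what the real script does:

#!/usr/bin/env bash
# Sketch only: enumerate NVMe controllers the way the trace shows, recording
# each one's PCI address (BDF) and listing its namespaces.
declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    name=${ctrl##*/}                                    # e.g. nvme1
    bdf=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:10.0
    ctrls[$name]=$name
    bdfs[$name]=$bdf
    for ns in "$ctrl/${name}n"*; do
        [[ -e $ns ]] && echo "$name ($bdf): namespace ${ns##*/}"
    done
done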
00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.237 06:33:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.237 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.238 06:33:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.238 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:24.239 06:33:15 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
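Once these arrays are populated, the identify fields are ordinary shell values. As a worked example using the nvme0n1 numbers captured earlier (nsze=0x140000, flbas=0x4, lbaf4 reporting lbads:12), the namespace capacity follows directly; the values are hard-coded below purely for illustration rather than read back from the array:

#!/usr/bin/env bash
# Sketch only: compute namespace capacity from the id-ns fields parsed above.
nsze=0x140000                                   # logical block count (nvme0n1)
flbas=0x4                                       # low 4 bits select the LBA format
lbaf4='ms:0 lbads:12 rp:0 (in use)'             # the in-use format descriptor

fmt_idx=$(( flbas & 0xf ))                      # -> 4, i.e. lbaf4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
bytes=$(( nsze * (1 << lbads) ))                # 0x140000 * 4096 = 5368709120 (5 GiB)
echo "format $fmt_idx, block size $((1 << lbads)) B, capacity $bytes B"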
00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:24.239 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.240 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.241 
06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:24.241 06:33:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.241 06:33:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.241 06:33:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.241 06:33:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.241 06:33:15 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.241 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:24.242 06:33:15 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:24.242 06:33:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:24.242 06:33:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:24.242 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:24.243 06:33:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:24.243 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:24.244 
06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.244 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.245 06:33:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:24.245 06:33:16 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:24.245 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:24.246 06:33:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:24.246 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
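(Side note on the flbas/lbafN values recorded above: bits 3:0 of flbas select the in-use LBA format, and the lbads field of that descriptor is a power-of-two exponent, so flbas=0x4 with "ms:0 lbads:12 rp:0 (in use)" means 4096-byte blocks. Illustrative helper only; the name and parsing are assumptions, not functions.sh code.)

    lba_data_size() {
        local flbas=$1 lbaf_desc=$2
        local idx=$(( flbas & 0xf ))              # flbas bits 3:0 pick the LBA format
        local lbads="${lbaf_desc##*lbads:}"       # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
        lbads=${lbads%% *}                        # -> "12"
        echo "lbaf$idx: $(( 1 << lbads ))-byte blocks"
    }

    lba_data_size 0x4 'ms:0 lbads:12 rp:0 (in use)'   # -> lbaf4: 4096-byte blocks
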
00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 
06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 
06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.247 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:24.248 06:33:16 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.248 
06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.248 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:24.249 06:33:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.249 06:33:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:24.249 06:33:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.249 06:33:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:24.249 06:33:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:24.249 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
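(Context for the nvme3 dump in progress here: the script is iterating /sys/class/nvme/nvme*, and the trace earlier recorded nvme3 against PCI BDF 0000:00:13.0 via ctrls/bdfs/ordered_ctrls. A rough sketch of that per-controller bookkeeping under assumed array names, and assuming the sysfs `address` attribute for the BDF, which is not necessarily how functions.sh derives it.)

    declare -A bdfs nss
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                   # e.g. nvme3
        bdfs[$name]=$(cat "$ctrl/address" 2>/dev/null)     # PCI BDF, e.g. 0000:00:13.0
        for ns in "$ctrl/${name}n"*; do                    # namespaces: nvme3n1, ...
            [[ -e $ns ]] && nss[$name]+="${ns##*/} "
        done
    done
    for name in "${!bdfs[@]}"; do
        echo "$name @ ${bdfs[$name]}: ${nss[$name]}"
    done
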
00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 
06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:24.250 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
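Side note on the wctemp=343 and cctemp=373 values recorded above for nvme3: these are the warning and critical composite temperature thresholds, which NVMe identify data reports in Kelvin. A quick standalone conversion (illustrative only, not part of nvme/functions.sh) puts them in Celsius:

  # Illustrative only: WCTEMP/CCTEMP are reported in Kelvin.
  wctemp=343; cctemp=373
  echo "warning threshold:  $(( wctemp - 273 )) C"    # ~70 C
  echo "critical threshold: $(( cctemp - 273 )) C"    # ~100 C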
00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
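The sqes=0x66 and cqes=0x44 fields captured just above pack the maximum and required queue-entry sizes as log2 values in the upper and lower nibbles. A small standalone snippet (illustrative only, not from the test scripts) decodes them:

  # Illustrative only: SQES/CQES encode (max << 4) | required, each as log2(bytes).
  sqes=0x66; cqes=0x44
  echo "SQ entry: required $(( 1 << (sqes & 0xf) )) B, max $(( 1 << (sqes >> 4) )) B"   # 64 B / 64 B
  echo "CQ entry: required $(( 1 << (cqes & 0xf) )) B, max $(( 1 << (cqes >> 4) )) B"   # 16 B / 16 B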
00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.251 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.252 06:33:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:24.252 
06:33:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:24.252 06:33:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:24.252 06:33:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:24.252 06:33:16 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.398 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.398 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.398 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.398 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:25.398 06:33:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:25.398 06:33:17 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.398 06:33:17 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.398 06:33:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 ************************************ 00:09:25.398 START TEST nvme_simple_copy 00:09:25.398 ************************************ 00:09:25.398 06:33:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:25.659 Initializing NVMe Controllers 00:09:25.659 Attaching to 0000:00:10.0 00:09:25.659 Controller supports SCC. Attached to 0000:00:10.0 00:09:25.659 Namespace ID: 1 size: 6GB 00:09:25.659 Initialization complete. 00:09:25.659 00:09:25.659 Controller QEMU NVMe Ctrl (12340 ) 00:09:25.659 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:25.659 Namespace Block Size:4096 00:09:25.659 Writing LBAs 0 to 63 with Random Data 00:09:25.659 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:25.659 LBAs matching Written Data: 64 00:09:25.659 00:09:25.659 real 0m0.298s 00:09:25.659 user 0m0.122s 00:09:25.659 sys 0m0.074s 00:09:25.659 ************************************ 00:09:25.659 06:33:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.659 06:33:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:25.659 END TEST nvme_simple_copy 00:09:25.659 ************************************ 00:09:25.659 00:09:25.659 real 0m7.688s 00:09:25.659 user 0m1.033s 00:09:25.659 sys 0m1.443s 00:09:25.659 06:33:17 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.659 ************************************ 00:09:25.659 END TEST nvme_scc 00:09:25.659 06:33:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:25.659 ************************************ 00:09:25.920 06:33:17 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:25.920 06:33:17 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:25.920 06:33:17 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:25.920 06:33:17 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:25.921 06:33:17 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:25.921 06:33:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.921 06:33:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.921 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:09:25.921 ************************************ 00:09:25.921 START TEST nvme_fdp 00:09:25.921 ************************************ 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:25.921 * Looking for test storage... 
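The controller selection logged above keys off ONCS bit 8, the Copy (simple copy) capability: get_ctrls_with_feature reads each controller's cached oncs value (0x15d for all four here), keeps only controllers with that bit set, and nvme1 at 0000:00:10.0 is then handed to the simple_copy test. A standalone sketch of the same gate (assumed value, not tied to the harness):

  # Sketch of the ONCS gate used above; bit 8 = Copy command (SCC) support.
  # 0x15d = 0b1_0101_1101, so bit 8 is set and the controller qualifies.
  oncs=0x15d
  if (( oncs & (1 << 8) )); then
      echo "controller supports the Copy command (simple copy)"
  fi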
00:09:25.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.921 --rc genhtml_branch_coverage=1 00:09:25.921 --rc genhtml_function_coverage=1 00:09:25.921 --rc genhtml_legend=1 00:09:25.921 --rc geninfo_all_blocks=1 00:09:25.921 --rc geninfo_unexecuted_blocks=1 00:09:25.921 00:09:25.921 ' 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.921 --rc genhtml_branch_coverage=1 00:09:25.921 --rc genhtml_function_coverage=1 00:09:25.921 --rc genhtml_legend=1 00:09:25.921 --rc geninfo_all_blocks=1 00:09:25.921 --rc geninfo_unexecuted_blocks=1 00:09:25.921 00:09:25.921 ' 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:25.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.921 --rc genhtml_branch_coverage=1 00:09:25.921 --rc genhtml_function_coverage=1 00:09:25.921 --rc genhtml_legend=1 00:09:25.921 --rc geninfo_all_blocks=1 00:09:25.921 --rc geninfo_unexecuted_blocks=1 00:09:25.921 00:09:25.921 ' 00:09:25.921 06:33:17 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.921 --rc genhtml_branch_coverage=1 00:09:25.921 --rc genhtml_function_coverage=1 00:09:25.921 --rc genhtml_legend=1 00:09:25.921 --rc geninfo_all_blocks=1 00:09:25.921 --rc geninfo_unexecuted_blocks=1 00:09:25.921 00:09:25.921 ' 00:09:25.921 06:33:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.921 06:33:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.921 06:33:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.921 06:33:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.921 06:33:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.921 06:33:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:25.921 06:33:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
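The prologue traced above (scripts/common.sh) decides whether the installed lcov predates version 2 by splitting each version string on '.', '-' and ':' and comparing the fields left to right. A condensed, standalone sketch of that pattern (simplified, numeric components only; not the exact cmp_versions implementation):

  # Condensed sketch of the version comparison traced above (simplified).
  version_lt() {                       # returns 0 if $1 < $2
      local -a a b; local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"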
00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:25.921 06:33:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:25.921 06:33:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.921 06:33:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:26.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.529 Waiting for block devices as requested 00:09:26.529 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.529 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.790 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.790 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.089 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.089 06:33:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.089 06:33:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.089 06:33:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.089 06:33:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.089 06:33:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
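The scan that begins here repeats the nvme_get pattern already traced above for nvme3: scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller's PCI address, runs nvme id-ctrl against the character device, and an IFS=: read loop evals every "field : value" pair into a per-controller associative array, which the ctrls/nvmes/bdfs maps then index. A minimal standalone sketch of that parsing pattern (simplified from nvme/functions.sh; assumes nvme-cli and a /dev/nvme0 device):

  # Minimal sketch of the id-ctrl parsing loop traced above and below (simplified).
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg// /}                   # field names come back space-padded
      val=${val# }                     # drop the space that follows the colon
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "oncs=${ctrl[oncs]} oacs=${ctrl[oacs]} mdts=${ctrl[mdts]}"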
00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.089 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.089 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.090 06:33:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.090 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.091 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.091 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.091 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.092 
06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.092 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.092 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.093 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.093 06:33:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.093 06:33:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.093 06:33:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.094 06:33:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.094 06:33:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 
06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.094 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 
06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.095 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.096 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.097 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.098 06:33:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.098 06:33:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.098 06:33:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.098 06:33:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.098 
06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.098 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.099 06:33:23 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.099 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.101 06:33:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:32.101 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:32.101 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
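The entries above all follow one pattern: the output of /usr/local/src/nvme-cli/nvme id-ns is read line by line with IFS=:, each field/value pair is tested with [[ -n ... ]], and the value is eval'd into a global associative array named after the device node (nvme2n1, nvme2n2, ...). A condensed sketch of that loop, with the helper name and the whitespace trimming simplified for illustration rather than copied from functions.sh:

nvme_get_sketch() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref"                          # global map named after the device, e.g. nvme2n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # field name, e.g. nsze, flbas, lbaf4
        val=${val#"${val%%[![:space:]]*}"}      # drop leading padding from the value
        [[ -n $reg && -n $val ]] || continue    # skip blank or partial lines
        eval "${ref}[\$reg]=\$val"              # nvme2n1[nsze]=0x100000, ...
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}
# Usage mirroring the trace: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1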
00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 06:33:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.103 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.104 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.104 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
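The flbas and lbafN fields cached for each namespace are enough to recover the active block size: flbas=0x4 selects LBA format 4, whose descriptor reads 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks on these QEMU namespaces. A small sketch of that lookup (helper name illustrative; it assumes the format index fits in the low nibble of FLBAS, which holds here since nlbaf=7):

ns_block_size_sketch() {
    local -n ns=$1                     # nameref to a cached namespace array, e.g. nvme2n1
    local fmt=$(( ns[flbas] & 0xf ))   # in-use LBA format index (low nibble of FLBAS)
    local lbaf=${ns[lbaf$fmt]}         # e.g. 'ms:0 lbads:12 rp:0 (in use)'
    local lbads=${lbaf##*lbads:}       # cut everything up to the lbads field
    lbads=${lbads%% *}                 # keep only the number, e.g. 12
    echo $(( 1 << lbads ))             # 4096 bytes for lbads:12
}
# Usage: ns_block_size_sketch nvme2n1   -> 4096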
00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.105 
06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.105 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.106 06:33:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.106 06:33:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.106 06:33:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.106 06:33:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.106 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.106 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 
06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:32.107 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 
06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.108 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
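The xtrace above is SPDK's test/nvme/functions.sh folding the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 into a bash associative array (nvme3[vid], nvme3[sn], nvme3[ctratt], ...): each line is split on ':' into a register name and value, empty values are skipped, and the pair is eval'd into the array. A minimal stand-alone sketch of the same pattern, not the exact SPDK helper, assuming nvme-cli is installed and /dev/nvme3 exists:

  declare -A ctrl=()
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue      # skip blank lines and section banners
      ctrl[${reg// /}]=${val# }      # e.g. ctrl[ctratt]=0x88010, ctrl[sn]='12343 '
  done < <(nvme id-ctrl /dev/nvme3)
  echo "sn=${ctrl[sn]} fr=${ctrl[fr]} ctratt=${ctrl[ctratt]}"

Keeping the values in an array is what lets the later feature checks (ctrl_has_fdp and friends) test fields such as ctratt without re-issuing Identify commands.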
00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.109 06:33:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:32.109 06:33:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
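ctrl_has_fdp above reduces the FDP check to one bit test: the Identify Controller CTRATT value parsed earlier is read back and bit 19 (the Flexible Data Placement attribute) is examined. In this run nvme0, nvme1 and nvme2 report ctratt=0x8000, so the bit is clear, while nvme3 reports 0x88010 and is the controller the FDP test ends up using. The same check in isolation, reusing the ctrl array from the sketch above (a simplification of the functions.sh helper):

  ctratt=${ctrl[ctratt]:-0}
  if (( ctratt & 1 << 19 )); then    # 0x88010 & 0x80000 != 0  -> FDP capable
      echo "FDP supported"
  else                               # 0x8000  & 0x80000 == 0  -> no FDP
      echo "FDP not supported"
  fi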
00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:32.110 06:33:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:32.110 06:33:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:32.110 06:33:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.250 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.250 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.250 06:33:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:33.251 06:33:25 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.251 06:33:25 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.251 06:33:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:33.251 ************************************ 00:09:33.251 START TEST nvme_flexible_data_placement 00:09:33.251 ************************************ 00:09:33.251 06:33:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:33.511 Initializing NVMe Controllers 00:09:33.511 Attaching to 0000:00:13.0 00:09:33.511 Controller supports FDP Attached to 0000:00:13.0 00:09:33.511 Namespace ID: 1 Endurance Group ID: 1 00:09:33.511 Initialization complete. 00:09:33.511 00:09:33.511 ================================== 00:09:33.511 == FDP tests for Namespace: #01 == 00:09:33.511 ================================== 00:09:33.511 00:09:33.511 Get Feature: FDP: 00:09:33.511 ================= 00:09:33.511 Enabled: Yes 00:09:33.511 FDP configuration Index: 0 00:09:33.511 00:09:33.511 FDP configurations log page 00:09:33.511 =========================== 00:09:33.511 Number of FDP configurations: 1 00:09:33.511 Version: 0 00:09:33.511 Size: 112 00:09:33.511 FDP Configuration Descriptor: 0 00:09:33.511 Descriptor Size: 96 00:09:33.511 Reclaim Group Identifier format: 2 00:09:33.511 FDP Volatile Write Cache: Not Present 00:09:33.511 FDP Configuration: Valid 00:09:33.511 Vendor Specific Size: 0 00:09:33.511 Number of Reclaim Groups: 2 00:09:33.511 Number of Reclaim Unit Handles: 8 00:09:33.511 Max Placement Identifiers: 128 00:09:33.511 Number of Namespaces Supported: 256 00:09:33.511 Reclaim Unit Nominal Size: 6000000 bytes 00:09:33.511 Estimated Reclaim Unit Time Limit: Not Reported 00:09:33.511 RUH Desc #000: RUH Type: Initially Isolated 00:09:33.511 RUH Desc #001: RUH Type: Initially Isolated 00:09:33.511 RUH Desc #002: RUH Type: Initially Isolated 00:09:33.511 RUH Desc #003: RUH Type: Initially Isolated 00:09:33.511 RUH Desc #004: RUH Type: Initially Isolated 00:09:33.512 RUH Desc #005: RUH Type: Initially Isolated 00:09:33.512 RUH Desc #006: RUH Type: Initially Isolated 00:09:33.512 RUH Desc #007: RUH Type: Initially Isolated 00:09:33.512 00:09:33.512 FDP reclaim unit handle usage log page 00:09:33.512 ====================================== 00:09:33.512 Number of Reclaim Unit Handles: 8 00:09:33.512 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:33.512 RUH Usage Desc #001: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #002: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #003: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #004: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #005: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #006: RUH Attributes: Unused 00:09:33.512 RUH Usage Desc #007: RUH Attributes: Unused 00:09:33.512 00:09:33.512 FDP statistics log page 00:09:33.512 ======================= 00:09:33.512 Host bytes with metadata written: 1035952128 00:09:33.512 Media bytes with metadata written: 1036120064 00:09:33.512 Media bytes erased: 0 00:09:33.512 00:09:33.512 FDP Reclaim unit handle status 00:09:33.512 ============================== 00:09:33.512 Number of RUHS descriptors: 2 00:09:33.512 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000440a 00:09:33.512 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:33.512 00:09:33.512 FDP write on placement id: 0 success 00:09:33.512 00:09:33.512 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:09:33.512 00:09:33.512 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:33.512 00:09:33.512 Get Feature: FDP Events for Placement handle: #0 00:09:33.512 ======================== 00:09:33.512 Number of FDP Events: 6 00:09:33.512 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:33.512 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:33.512 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:33.512 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:33.512 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:33.512 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:33.512 00:09:33.512 FDP events log page 00:09:33.512 =================== 00:09:33.512 Number of FDP events: 1 00:09:33.512 FDP Event #0: 00:09:33.512 Event Type: RU Not Written to Capacity 00:09:33.512 Placement Identifier: Valid 00:09:33.512 NSID: Valid 00:09:33.512 Location: Valid 00:09:33.512 Placement Identifier: 0 00:09:33.512 Event Timestamp: 6 00:09:33.512 Namespace Identifier: 1 00:09:33.512 Reclaim Group Identifier: 0 00:09:33.512 Reclaim Unit Handle Identifier: 0 00:09:33.512 00:09:33.512 FDP test passed 00:09:33.512 00:09:33.512 real 0m0.237s 00:09:33.512 user 0m0.080s 00:09:33.512 sys 0m0.055s 00:09:33.512 06:33:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.512 ************************************ 00:09:33.512 06:33:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:33.512 END TEST nvme_flexible_data_placement 00:09:33.512 ************************************ 00:09:33.512 ************************************ 00:09:33.512 END TEST nvme_fdp 00:09:33.512 ************************************ 00:09:33.512 00:09:33.512 real 0m7.731s 00:09:33.512 user 0m1.011s 00:09:33.512 sys 0m1.491s 00:09:33.512 06:33:25 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.512 06:33:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:33.512 06:33:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:33.512 06:33:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:33.512 06:33:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.512 06:33:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.512 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:09:33.773 ************************************ 00:09:33.773 START TEST nvme_rpc 00:09:33.773 ************************************ 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:33.773 * Looking for test storage... 
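The fdp unit test above exercised the feature end to end on nvme3: Get Feature FDP (enabled, configuration index 0), the FDP configurations, reclaim unit handle usage, statistics and events log pages, a write with a placement identifier, and an IO management (RUH update) command. Roughly the same information can be pulled by hand with nvme-cli; note that the feature and log identifiers below (0x1d for the FDP feature, 0x22 for FDP statistics) come from the NVMe FDP specification rather than from this log, that these pages are endurance-group scoped, and that flag support varies with the installed nvme-cli version, so treat this as a sketch:

  # FDP feature state for endurance group 1 (the group reported above)
  nvme get-feature /dev/nvme3 --feature-id=0x1d --cdw11=1
  # FDP statistics log page: host/media bytes with metadata written, media bytes erased
  nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64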
00:09:33.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.773 06:33:25 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.773 --rc genhtml_branch_coverage=1 00:09:33.773 --rc genhtml_function_coverage=1 00:09:33.773 --rc genhtml_legend=1 00:09:33.773 --rc geninfo_all_blocks=1 00:09:33.773 --rc geninfo_unexecuted_blocks=1 00:09:33.773 00:09:33.773 ' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.773 --rc genhtml_branch_coverage=1 00:09:33.773 --rc genhtml_function_coverage=1 00:09:33.773 --rc genhtml_legend=1 00:09:33.773 --rc geninfo_all_blocks=1 00:09:33.773 --rc geninfo_unexecuted_blocks=1 00:09:33.773 00:09:33.773 ' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:33.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.773 --rc genhtml_branch_coverage=1 00:09:33.773 --rc genhtml_function_coverage=1 00:09:33.773 --rc genhtml_legend=1 00:09:33.773 --rc geninfo_all_blocks=1 00:09:33.773 --rc geninfo_unexecuted_blocks=1 00:09:33.773 00:09:33.773 ' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.773 --rc genhtml_branch_coverage=1 00:09:33.773 --rc genhtml_function_coverage=1 00:09:33.773 --rc genhtml_legend=1 00:09:33.773 --rc geninfo_all_blocks=1 00:09:33.773 --rc geninfo_unexecuted_blocks=1 00:09:33.773 00:09:33.773 ' 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:33.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65768 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65768 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65768 ']' 00:09:33.773 06:33:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.773 06:33:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.033 [2024-11-19 06:33:25.739341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
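get_first_nvme_bdf above collects the controller addresses by running scripts/gen_nvme.sh and extracting every traddr with jq, then takes the first entry for the RPC test. The same lookup as a stand-alone snippet, assuming the repo checkout used in this run:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 in this run
  bdf=${bdfs[0]}               # nvme_rpc.sh attaches to this first device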
00:09:34.034 [2024-11-19 06:33:25.739460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65768 ] 00:09:34.034 [2024-11-19 06:33:25.897916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.294 [2024-11-19 06:33:25.997237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.294 [2024-11-19 06:33:25.997304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.865 06:33:26 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.865 06:33:26 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:34.865 06:33:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:35.126 Nvme0n1 00:09:35.126 06:33:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:35.126 06:33:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:35.126 request: 00:09:35.126 { 00:09:35.126 "bdev_name": "Nvme0n1", 00:09:35.126 "filename": "non_existing_file", 00:09:35.126 "method": "bdev_nvme_apply_firmware", 00:09:35.126 "req_id": 1 00:09:35.126 } 00:09:35.126 Got JSON-RPC error response 00:09:35.126 response: 00:09:35.126 { 00:09:35.126 "code": -32603, 00:09:35.126 "message": "open file failed." 00:09:35.126 } 00:09:35.126 06:33:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:35.126 06:33:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:35.126 06:33:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:35.388 06:33:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:35.388 06:33:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65768 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65768 ']' 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65768 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65768 00:09:35.388 killing process with pid 65768 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65768' 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65768 00:09:35.388 06:33:27 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65768 00:09:36.770 ************************************ 00:09:36.770 END TEST nvme_rpc 00:09:36.770 ************************************ 00:09:36.770 00:09:36.770 real 0m3.192s 00:09:36.770 user 0m6.045s 00:09:36.770 sys 0m0.511s 00:09:36.770 06:33:28 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.770 06:33:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 06:33:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:36.770 06:33:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:36.770 06:33:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.770 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 ************************************ 00:09:36.770 START TEST nvme_rpc_timeouts 00:09:36.770 ************************************ 00:09:36.770 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:37.030 * Looking for test storage... 00:09:37.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.030 06:33:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.030 --rc genhtml_branch_coverage=1 00:09:37.030 --rc genhtml_function_coverage=1 00:09:37.030 --rc genhtml_legend=1 00:09:37.030 --rc geninfo_all_blocks=1 00:09:37.030 --rc geninfo_unexecuted_blocks=1 00:09:37.030 00:09:37.030 ' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.030 --rc genhtml_branch_coverage=1 00:09:37.030 --rc genhtml_function_coverage=1 00:09:37.030 --rc genhtml_legend=1 00:09:37.030 --rc geninfo_all_blocks=1 00:09:37.030 --rc geninfo_unexecuted_blocks=1 00:09:37.030 00:09:37.030 ' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.030 --rc genhtml_branch_coverage=1 00:09:37.030 --rc genhtml_function_coverage=1 00:09:37.030 --rc genhtml_legend=1 00:09:37.030 --rc geninfo_all_blocks=1 00:09:37.030 --rc geninfo_unexecuted_blocks=1 00:09:37.030 00:09:37.030 ' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.030 --rc genhtml_branch_coverage=1 00:09:37.030 --rc genhtml_function_coverage=1 00:09:37.030 --rc genhtml_legend=1 00:09:37.030 --rc geninfo_all_blocks=1 00:09:37.030 --rc geninfo_unexecuted_blocks=1 00:09:37.030 00:09:37.030 ' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65832 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65832 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65865 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
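The trace above is the setup stage of nvme_rpc_timeouts.sh: point rpc_py at scripts/rpc.py, reserve two scratch files for the saved configurations, start an spdk_tgt on two cores, and register a trap so the target and the temp files are cleaned up if the test is interrupted. A condensed sketch of that stage (backgrounding with & and the $$-based file suffix are illustrative stand-ins; the run above derived both from the target pid through the test's own helpers):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tmpfile_default_settings=/tmp/settings_default_$$
tmpfile_modified_settings=/tmp/settings_modified_$$
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings}; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"    # autotest helper: block until the RPC socket is up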
00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65865 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65865 ']' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.030 06:33:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:37.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.030 06:33:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:37.030 [2024-11-19 06:33:28.908707] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:37.030 [2024-11-19 06:33:28.908831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65865 ] 00:09:37.288 [2024-11-19 06:33:29.066245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.288 [2024-11-19 06:33:29.167450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.288 [2024-11-19 06:33:29.167467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.851 06:33:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.851 06:33:29 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:37.851 06:33:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:37.851 Checking default timeout settings: 00:09:37.852 06:33:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:38.416 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:38.416 Making settings changes with rpc: 00:09:38.416 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:38.416 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:38.416 Check default vs. 
modified settings: 00:09:38.416 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65832 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65832 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:38.673 Setting action_on_timeout is changed as expected. 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65832 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.673 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65832 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.674 Setting timeout_us is changed as expected. 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65832 00:09:38.674 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65832 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.932 Setting timeout_admin_us is changed as expected. 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65832 /tmp/settings_modified_65832 00:09:38.932 06:33:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65865 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65865 ']' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65865 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65865 00:09:38.932 killing process with pid 65865 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65865' 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65865 00:09:38.932 06:33:30 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65865 00:09:40.307 RPC TIMEOUT SETTING TEST PASSED. 00:09:40.307 06:33:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
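The PASSED line above is the outcome of a save/modify/save/compare cycle: dump the default configuration with save_config, change the three timeout knobs with bdev_nvme_set_options, dump the configuration again, then extract each setting from both dumps and require the value to have changed. Condensed from the trace (same rpc_py and tmpfile variables as in the setup sketch; the loop body mirrors nvme_rpc_timeouts.sh@38-47):

$rpc_py save_config > "$tmpfile_default_settings"
$rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc_py save_config > "$tmpfile_modified_settings"
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # defaults are none/0/0, so every modified value must differ from its default
    [[ "$before" == "$after" ]] && exit 1
    echo "Setting $setting is changed as expected."
done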
00:09:40.307 00:09:40.307 real 0m3.217s 00:09:40.307 user 0m6.168s 00:09:40.307 sys 0m0.552s 00:09:40.307 ************************************ 00:09:40.307 END TEST nvme_rpc_timeouts 00:09:40.307 ************************************ 00:09:40.307 06:33:31 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.307 06:33:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:40.307 06:33:31 -- spdk/autotest.sh@239 -- # uname -s 00:09:40.307 06:33:31 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:40.307 06:33:31 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:40.307 06:33:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.307 06:33:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.307 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.307 ************************************ 00:09:40.307 START TEST sw_hotplug 00:09:40.307 ************************************ 00:09:40.307 06:33:31 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:40.307 * Looking for test storage... 00:09:40.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.307 06:33:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.307 --rc genhtml_branch_coverage=1 00:09:40.307 --rc genhtml_function_coverage=1 00:09:40.307 --rc genhtml_legend=1 00:09:40.307 --rc geninfo_all_blocks=1 00:09:40.307 --rc geninfo_unexecuted_blocks=1 00:09:40.307 00:09:40.307 ' 00:09:40.307 06:33:32 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.307 --rc genhtml_branch_coverage=1 00:09:40.307 --rc genhtml_function_coverage=1 00:09:40.308 --rc genhtml_legend=1 00:09:40.308 --rc geninfo_all_blocks=1 00:09:40.308 --rc geninfo_unexecuted_blocks=1 00:09:40.308 00:09:40.308 ' 00:09:40.308 06:33:32 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:40.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.308 --rc genhtml_branch_coverage=1 00:09:40.308 --rc genhtml_function_coverage=1 00:09:40.308 --rc genhtml_legend=1 00:09:40.308 --rc geninfo_all_blocks=1 00:09:40.308 --rc geninfo_unexecuted_blocks=1 00:09:40.308 00:09:40.308 ' 00:09:40.308 06:33:32 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.308 --rc genhtml_branch_coverage=1 00:09:40.308 --rc genhtml_function_coverage=1 00:09:40.308 --rc genhtml_legend=1 00:09:40.308 --rc geninfo_all_blocks=1 00:09:40.308 --rc geninfo_unexecuted_blocks=1 00:09:40.308 00:09:40.308 ' 00:09:40.308 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.566 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:40.566 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:40.566 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:40.566 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:40.825 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:40.825 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:40.825 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:40.825 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:40.825 06:33:32 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:40.825 06:33:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:40.826 06:33:32 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:40.826 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:40.826 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:40.826 06:33:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.084 Waiting for block devices as requested 00:09:41.388 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.388 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.388 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.388 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.696 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:46.696 06:33:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:46.696 06:33:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.958 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:46.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.958 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:47.219 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:47.480 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.480 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.480 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:47.480 06:33:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66723 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:47.741 06:33:39 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:47.741 06:33:39 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:47.741 06:33:39 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:47.741 06:33:39 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:47.741 06:33:39 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:47.741 06:33:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:48.001 Initializing NVMe Controllers 00:09:48.001 Attaching to 0000:00:10.0 00:09:48.001 Attaching to 0000:00:11.0 00:09:48.001 Attached to 0000:00:11.0 00:09:48.001 Attached to 0000:00:10.0 00:09:48.001 Initialization complete. Starting I/O... 
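Before the hotplug example was started, nvme_in_userspace (traced at sw_hotplug.sh@133 above) built the candidate controller list by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), keeping only functions currently owned by the kernel nvme driver. The core of that filter, condensed from the scripts/common.sh trace:

# list NVMe-class functions (class/subclass/progif 01/08/02) as full BDFs
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# each candidate BDF is kept only if the kernel nvme driver owns it
[[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")

The list is then trimmed to the first two controllers (nvme_count=2) and setup.sh is re-run with PCI_ALLOWED='0000:00:10.0 0000:00:11.0', which is why only those two devices take part in the remove/attach cycles that follow.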
00:09:48.001 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:48.001 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:48.001 00:09:48.945 QEMU NVMe Ctrl (12341 ): 2005 I/Os completed (+2005) 00:09:48.945 QEMU NVMe Ctrl (12340 ): 2096 I/Os completed (+2096) 00:09:48.945 00:09:49.886 QEMU NVMe Ctrl (12341 ): 5645 I/Os completed (+3640) 00:09:49.886 QEMU NVMe Ctrl (12340 ): 5746 I/Os completed (+3650) 00:09:49.886 00:09:50.829 QEMU NVMe Ctrl (12341 ): 9440 I/Os completed (+3795) 00:09:50.829 QEMU NVMe Ctrl (12340 ): 9529 I/Os completed (+3783) 00:09:50.829 00:09:51.772 QEMU NVMe Ctrl (12341 ): 13229 I/Os completed (+3789) 00:09:51.772 QEMU NVMe Ctrl (12340 ): 13313 I/Os completed (+3784) 00:09:51.772 00:09:53.157 QEMU NVMe Ctrl (12341 ): 16445 I/Os completed (+3216) 00:09:53.157 QEMU NVMe Ctrl (12340 ): 16529 I/Os completed (+3216) 00:09:53.157 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.730 [2024-11-19 06:33:45.477604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:53.730 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:53.730 [2024-11-19 06:33:45.478860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.478918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.478948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.478965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:53.730 [2024-11-19 06:33:45.480725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.480874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.480893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.480907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:53.730 EAL: Scan for (pci) bus failed. 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.730 [2024-11-19 06:33:45.503378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:53.730 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:53.730 [2024-11-19 06:33:45.504626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.504813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.504863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.504892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:53.730 [2024-11-19 06:33:45.508237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.508342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.508364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 [2024-11-19 06:33:45.508376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.730 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:53.730 EAL: Scan for (pci) bus failed. 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.730 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:53.991 Attaching to 0000:00:10.0 00:09:53.991 Attached to 0000:00:10.0 00:09:53.991 QEMU NVMe Ctrl (12340 ): 32 I/Os completed (+32) 00:09:53.991 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:53.991 06:33:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:53.991 Attaching to 0000:00:11.0 00:09:53.991 Attached to 0000:00:11.0 00:09:54.934 QEMU NVMe Ctrl (12340 ): 3236 I/Os completed (+3204) 00:09:54.934 QEMU NVMe Ctrl (12341 ): 2948 I/Os completed (+2948) 00:09:54.934 00:09:55.876 QEMU NVMe Ctrl (12340 ): 6006 I/Os completed (+2770) 00:09:55.876 QEMU NVMe Ctrl (12341 ): 6073 I/Os completed (+3125) 00:09:55.876 00:09:56.872 QEMU NVMe Ctrl (12340 ): 9286 I/Os completed (+3280) 00:09:56.872 QEMU NVMe Ctrl (12341 ): 9353 I/Os completed (+3280) 00:09:56.872 00:09:57.835 QEMU NVMe Ctrl (12340 ): 12496 I/Os completed (+3210) 00:09:57.835 QEMU NVMe Ctrl (12341 ): 12571 I/Os completed (+3218) 00:09:57.835 00:09:58.779 QEMU NVMe Ctrl (12340 ): 15520 I/Os completed (+3024) 00:09:58.779 QEMU NVMe Ctrl (12341 ): 15605 I/Os completed (+3034) 00:09:58.779 00:10:00.167 QEMU NVMe Ctrl (12340 ): 18144 I/Os completed (+2624) 00:10:00.167 QEMU NVMe Ctrl (12341 ): 18229 I/Os completed (+2624) 00:10:00.167 00:10:01.110 QEMU NVMe Ctrl (12340 ): 21729 I/Os completed (+3585) 
00:10:01.110 QEMU NVMe Ctrl (12341 ): 21873 I/Os completed (+3644) 00:10:01.110 00:10:02.054 QEMU NVMe Ctrl (12340 ): 25392 I/Os completed (+3663) 00:10:02.054 QEMU NVMe Ctrl (12341 ): 25527 I/Os completed (+3654) 00:10:02.054 00:10:02.998 QEMU NVMe Ctrl (12340 ): 28484 I/Os completed (+3092) 00:10:02.998 QEMU NVMe Ctrl (12341 ): 28716 I/Os completed (+3189) 00:10:02.998 00:10:03.941 QEMU NVMe Ctrl (12340 ): 31910 I/Os completed (+3426) 00:10:03.941 QEMU NVMe Ctrl (12341 ): 32228 I/Os completed (+3512) 00:10:03.941 00:10:04.886 QEMU NVMe Ctrl (12340 ): 35648 I/Os completed (+3738) 00:10:04.886 QEMU NVMe Ctrl (12341 ): 35942 I/Os completed (+3714) 00:10:04.886 00:10:05.830 QEMU NVMe Ctrl (12340 ): 39363 I/Os completed (+3715) 00:10:05.830 QEMU NVMe Ctrl (12341 ): 39605 I/Os completed (+3663) 00:10:05.830 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.091 [2024-11-19 06:33:57.786378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:06.091 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:06.091 [2024-11-19 06:33:57.789295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.789348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.789364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.789378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.091 [2024-11-19 06:33:57.791207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.791256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.791269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.791282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.091 [2024-11-19 06:33:57.809200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:06.091 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:06.091 [2024-11-19 06:33:57.810069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.810099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.810117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.810131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.091 [2024-11-19 06:33:57.811509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.811541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.811554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 [2024-11-19 06:33:57.811569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:06.091 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:06.091 EAL: Scan for (pci) bus failed. 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.091 06:33:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:06.091 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:06.352 Attaching to 0000:00:10.0 00:10:06.352 Attached to 0000:00:10.0 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.352 06:33:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:06.352 Attaching to 0000:00:11.0 00:10:06.352 Attached to 0000:00:11.0 00:10:06.925 QEMU NVMe Ctrl (12340 ): 2546 I/Os completed (+2546) 00:10:06.925 QEMU NVMe Ctrl (12341 ): 2223 I/Os completed (+2223) 00:10:06.925 00:10:07.868 QEMU NVMe Ctrl (12340 ): 6279 I/Os completed (+3733) 00:10:07.868 QEMU NVMe Ctrl (12341 ): 5874 I/Os completed (+3651) 00:10:07.868 00:10:08.814 QEMU NVMe Ctrl (12340 ): 10155 I/Os completed (+3876) 00:10:08.814 QEMU NVMe Ctrl (12341 ): 9750 I/Os completed (+3876) 00:10:08.814 00:10:10.200 QEMU NVMe Ctrl (12340 ): 13907 I/Os completed (+3752) 00:10:10.200 QEMU NVMe Ctrl (12341 ): 13434 I/Os completed (+3684) 00:10:10.200 00:10:10.773 QEMU NVMe Ctrl (12340 ): 17767 I/Os completed (+3860) 00:10:10.773 QEMU NVMe Ctrl (12341 ): 17301 I/Os completed (+3867) 00:10:10.773 00:10:12.154 QEMU NVMe Ctrl (12340 ): 21502 I/Os completed (+3735) 00:10:12.154 QEMU NVMe Ctrl (12341 ): 20948 I/Os completed (+3647) 00:10:12.154 00:10:13.090 QEMU NVMe Ctrl (12340 ): 25884 I/Os completed (+4382) 00:10:13.090 QEMU NVMe Ctrl (12341 ): 25408 I/Os completed (+4460) 00:10:13.090 
00:10:14.026 QEMU NVMe Ctrl (12340 ): 29593 I/Os completed (+3709) 00:10:14.026 QEMU NVMe Ctrl (12341 ): 29448 I/Os completed (+4040) 00:10:14.026 00:10:14.963 QEMU NVMe Ctrl (12340 ): 32505 I/Os completed (+2912) 00:10:14.963 QEMU NVMe Ctrl (12341 ): 32427 I/Os completed (+2979) 00:10:14.963 00:10:15.904 QEMU NVMe Ctrl (12340 ): 36084 I/Os completed (+3579) 00:10:15.904 QEMU NVMe Ctrl (12341 ): 36104 I/Os completed (+3677) 00:10:15.904 00:10:16.849 QEMU NVMe Ctrl (12340 ): 39769 I/Os completed (+3685) 00:10:16.849 QEMU NVMe Ctrl (12341 ): 39794 I/Os completed (+3690) 00:10:16.849 00:10:17.793 QEMU NVMe Ctrl (12340 ): 43454 I/Os completed (+3685) 00:10:17.793 QEMU NVMe Ctrl (12341 ): 43364 I/Os completed (+3570) 00:10:17.793 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.365 [2024-11-19 06:34:10.122950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:18.365 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:18.365 [2024-11-19 06:34:10.124614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.124765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.124803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.124866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.365 [2024-11-19 06:34:10.127042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.127150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.127189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.127257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.365 [2024-11-19 06:34:10.143232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:18.365 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:18.365 [2024-11-19 06:34:10.144380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.144511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.144579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.144612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.365 [2024-11-19 06:34:10.146430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.146518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.146561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 [2024-11-19 06:34:10.146639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.365 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:18.626 Attaching to 0000:00:10.0 00:10:18.626 Attached to 0000:00:10.0 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.626 06:34:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:18.626 Attaching to 0000:00:11.0 00:10:18.626 Attached to 0000:00:11.0 00:10:18.626 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.626 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.626 [2024-11-19 06:34:10.406065] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:30.926 06:34:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:30.926 06:34:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.926 06:34:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.92 00:10:30.926 06:34:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.92 00:10:30.926 06:34:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:30.926 06:34:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.92 00:10:30.926 06:34:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.92 2 00:10:30.926 remove_attach_helper took 42.92s to complete (handling 2 nvme drive(s)) 06:34:22 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66723 00:10:37.512 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66723) - No such process 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66723 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67271 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67271 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67271 ']' 00:10:37.512 06:34:28 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.512 06:34:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.512 [2024-11-19 06:34:28.478887] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:37.512 [2024-11-19 06:34:28.479199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67271 ] 00:10:37.512 [2024-11-19 06:34:28.641576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.512 [2024-11-19 06:34:28.736216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:37.512 06:34:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:37.512 06:34:29 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:37.512 06:34:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:44.072 [2024-11-19 06:34:35.415558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:44.072 [2024-11-19 06:34:35.416766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.416803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.416816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.416833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.416840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.416849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.416856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.416865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.416871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.416882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.416889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.416896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 [2024-11-19 06:34:35.915551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:44.072 [2024-11-19 06:34:35.916704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.916728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.916739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.916751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.916760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.916775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.916782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.916789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 [2024-11-19 06:34:35.916796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.072 [2024-11-19 06:34:35.916803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.072 [2024-11-19 06:34:35.916810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.072 06:34:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:44.072 06:34:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.639 06:34:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.639 06:34:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.639 06:34:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.639 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.897 06:34:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.092 06:34:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.092 [2024-11-19 06:34:48.815744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:57.092 [2024-11-19 06:34:48.816993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.092 [2024-11-19 06:34:48.817027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.092 [2024-11-19 06:34:48.817037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.092 [2024-11-19 06:34:48.817054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.092 [2024-11-19 06:34:48.817061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.092 [2024-11-19 06:34:48.817070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.092 [2024-11-19 06:34:48.817078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.092 [2024-11-19 06:34:48.817085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.092 [2024-11-19 06:34:48.817091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.092 [2024-11-19 06:34:48.817100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.092 [2024-11-19 06:34:48.817106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.092 [2024-11-19 06:34:48.817114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:57.092 06:34:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.659 [2024-11-19 06:34:49.315740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
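The loop traced above is built around a small helper (sw_hotplug.sh@12-13 in the trace): bdev_get_bdevs is queried over RPC, jq pulls each controller's PCI address out of driver_specific.nvme, and sort -u collapses duplicates. The detach half of the test (sw_hotplug.sh@50-51) then polls that helper, printing "Still waiting for %s to be gone" and sleeping 0.5 s until none of the removed bdfs remain. The sketch below reconstructs that logic from the trace alone; rpc_cmd in the harness ultimately drives scripts/rpc.py, so the direct rpc.py call, the $rootdir variable, and the wait_for_detach name are assumptions, and the nvmes array is the harness's list of devices under test (0000:00:10.0 and 0000:00:11.0 here).

# Sketch of the helper at sw_hotplug.sh@12-13: PCI addresses (bdfs) of every
# NVMe controller that still backs an SPDK bdev. Assumes rpc.py reaches the
# running target the same way the harness's rpc_cmd wrapper does.
bdev_bdfs() {
    "$rootdir/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Detach-side wait (sw_hotplug.sh@50-51): poll until the surprise-removed
# controllers stop showing up in the bdev list.
wait_for_detach() {
    local bdfs
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}

In the log this poll first reports both bdfs, then only 0000:00:11.0, and exits when the count hits zero, which is the (( 2 > 0 )), (( 1 > 0 )), (( 0 > 0 )) progression visible in the xtrace.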
00:10:57.659 [2024-11-19 06:34:49.316867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.659 [2024-11-19 06:34:49.316897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.659 [2024-11-19 06:34:49.316909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.659 [2024-11-19 06:34:49.316920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.659 [2024-11-19 06:34:49.316941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.659 [2024-11-19 06:34:49.316949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.659 [2024-11-19 06:34:49.316957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.659 [2024-11-19 06:34:49.316964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.659 [2024-11-19 06:34:49.316972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.659 [2024-11-19 06:34:49.316978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.659 [2024-11-19 06:34:49.316986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.659 [2024-11-19 06:34:49.316992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.659 06:34:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.659 06:34:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.659 06:34:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:57.659 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.918 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.918 06:34:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.119 06:35:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.119 06:35:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.119 [2024-11-19 06:35:01.715964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
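Once the bdevs are gone, the trace shows the re-attach half of the cycle (sw_hotplug.sh@56-62): a single echo 1, then for each controller an echo of uio_pci_generic, two echoes of the device's bdf and an empty echo, followed by sleep 12 before the bdf list is compared against the expected pair. xtrace does not record where those echoes are redirected, so the sysfs destinations in the sketch below are assumptions about the usual software re-plug sequence (and the double bdf write from the trace is collapsed into a single probe); only the echoed values come from the log.

# Re-attach sketch; values from the trace, sysfs paths assumed.
rescan_and_rebind() {
    local dev
    echo 1 > /sys/bus/pci/rescan                                  # sw_hotplug.sh@56: bring the slots back
    for dev in "${nvmes[@]}"; do                                  # 0000:00:10.0 0000:00:11.0
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe                  # have the kernel probe it again
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"     # clear the override
    done
    sleep 12                                                      # sw_hotplug.sh@66: give SPDK time to re-enumerate
}

Binding back to uio_pci_generic rather than to the kernel nvme driver is what lets the SPDK target reclaim the devices after the rescan.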
00:11:10.119 [2024-11-19 06:35:01.717194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.119 [2024-11-19 06:35:01.717227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.119 [2024-11-19 06:35:01.717237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.119 [2024-11-19 06:35:01.717253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.119 [2024-11-19 06:35:01.717260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.119 [2024-11-19 06:35:01.717270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.119 [2024-11-19 06:35:01.717277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.119 [2024-11-19 06:35:01.717285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.119 [2024-11-19 06:35:01.717292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.119 [2024-11-19 06:35:01.717299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.119 [2024-11-19 06:35:01.717306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.119 [2024-11-19 06:35:01.717313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.378 06:35:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.378 06:35:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.378 [2024-11-19 06:35:02.215965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:10.378 [2024-11-19 06:35:02.217093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.378 [2024-11-19 06:35:02.217122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.378 [2024-11-19 06:35:02.217134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.378 [2024-11-19 06:35:02.217146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.378 [2024-11-19 06:35:02.217154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.378 [2024-11-19 06:35:02.217162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.378 [2024-11-19 06:35:02.217170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.378 [2024-11-19 06:35:02.217177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.378 [2024-11-19 06:35:02.217186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.378 [2024-11-19 06:35:02.217192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.378 [2024-11-19 06:35:02.217200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.378 [2024-11-19 06:35:02.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.378 06:35:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:10.378 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.960 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.961 06:35:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.961 06:35:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.961 06:35:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.961 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.219 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.219 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.219 06:35:02 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.219 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.219 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:11.219 06:35:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.219 06:35:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.219 06:35:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:11:23.422 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:23.422 06:35:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:23.422 06:35:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:30.008 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.008 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.008 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.009 [2024-11-19 06:35:21.171539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:30.009 [2024-11-19 06:35:21.172430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.172460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.172471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.172486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.172493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.172508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.172516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.172522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.172531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.172537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.172546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.571536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:30.009 [2024-11-19 06:35:21.572463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.572491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.572501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.572513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.572522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.572529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.572537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.572543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.572551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 [2024-11-19 06:35:21.572558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.009 [2024-11-19 06:35:21.572566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.009 [2024-11-19 06:35:21.572572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.009 06:35:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:30.009 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.268 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.269 06:35:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.470 06:35:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.470 06:35:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.470 06:35:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.470 06:35:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.470 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.471 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.471 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.471 06:35:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.471 06:35:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.471 06:35:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.471 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.471 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.471 [2024-11-19 06:35:34.071782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
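A few lines further up, debug_remove_attach_helper 3 6 true kicks this sequence off: the timing wrapper in autotest_common.sh sets TIMEFORMAT=%2R so bash's time builtin prints only the elapsed wall-clock seconds, and that value becomes the helper_time reported as "remove_attach_helper took ...s to complete (handling 2 nvme drive(s))". Inside, remove_attach_helper loops hotplug_events times, surprise-removing every controller, waiting for its bdevs to vanish, re-attaching, and checking that the same two bdfs come back. A condensed sketch under those assumptions, reusing the wait_for_detach and rescan_and_rebind sketches above (the sysfs remove path is likewise an assumption; the trace only shows echo 1 at sw_hotplug.sh@40):

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=${3:-false}
    local dev bdfs

    sleep "$hotplug_wait"                                # sw_hotplug.sh@36: let the target settle first
    while ((hotplug_events--)); do                       # three iterations in this run
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"  # assumed target of the echo at sw_hotplug.sh@40
        done
        wait_for_detach                                  # bdevs for the removed bdfs must disappear
        rescan_and_rebind                                # rescan, rebind to uio_pci_generic, sleep 12
        bdfs=($(bdev_bdfs))                              # sw_hotplug.sh@70
        [[ "${bdfs[*]}" == "${nvmes[*]}" ]]              # sw_hotplug.sh@71: both bdfs must be back
    done
}

# Timing wrapper in the spirit of autotest_common.sh@709-722: %2R keeps only the
# wall-clock seconds; the helper's own output is dropped here to keep the sketch short.
debug_remove_attach_helper() {
    local helper_time
    helper_time=$({ TIMEFORMAT=%2R; time remove_attach_helper "$@" > /dev/null; } 2>&1)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"
}

The 45.72 s and 44.69 s figures in the log are exactly this measurement for the two passes.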
00:11:42.471 [2024-11-19 06:35:34.072682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.471 [2024-11-19 06:35:34.072712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.471 [2024-11-19 06:35:34.072722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.471 [2024-11-19 06:35:34.072740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.471 [2024-11-19 06:35:34.072747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.471 [2024-11-19 06:35:34.072755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.471 [2024-11-19 06:35:34.072762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.471 [2024-11-19 06:35:34.072771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.471 [2024-11-19 06:35:34.072777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.471 [2024-11-19 06:35:34.072786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.471 [2024-11-19 06:35:34.072792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.471 [2024-11-19 06:35:34.072800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.730 [2024-11-19 06:35:34.471777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
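Just before that wrapper is launched, the trace also shows SPDK's NVMe hotplug poller being cycled over JSON-RPC (sw_hotplug.sh@119-120): bdev_nvme_set_hotplug -d disables periodic hotplug scanning and -e re-enables it before the three hotplug events traced around this point are driven. Assuming the usual scripts/rpc.py entry point behind rpc_cmd, the toggle amounts to:

# Cycle the NVMe hotplug poller (flags exactly as seen in the trace).
"$rootdir/scripts/rpc.py" bdev_nvme_set_hotplug -d    # disable periodic hotplug scanning
"$rootdir/scripts/rpc.py" bdev_nvme_set_hotplug -e    # re-enable it for the pass that follows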
00:11:42.730 [2024-11-19 06:35:34.472626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.730 [2024-11-19 06:35:34.472651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.730 [2024-11-19 06:35:34.472663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.730 [2024-11-19 06:35:34.472675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.730 [2024-11-19 06:35:34.472685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.730 [2024-11-19 06:35:34.472692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.730 [2024-11-19 06:35:34.472700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.730 [2024-11-19 06:35:34.472707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.730 [2024-11-19 06:35:34.472715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.730 [2024-11-19 06:35:34.472722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.730 [2024-11-19 06:35:34.472730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.730 [2024-11-19 06:35:34.472736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.730 06:35:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.730 06:35:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.730 06:35:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.730 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.988 06:35:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.189 06:35:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.189 06:35:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.189 [2024-11-19 06:35:46.971988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:55.189 [2024-11-19 06:35:46.973092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.189 [2024-11-19 06:35:46.973118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.189 [2024-11-19 06:35:46.973129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.189 [2024-11-19 06:35:46.973146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.189 [2024-11-19 06:35:46.973154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.189 [2024-11-19 06:35:46.973162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.189 [2024-11-19 06:35:46.973169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.189 [2024-11-19 06:35:46.973180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.189 [2024-11-19 06:35:46.973187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.189 [2024-11-19 06:35:46.973195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.189 [2024-11-19 06:35:46.973201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.189 [2024-11-19 06:35:46.973209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.448 [2024-11-19 06:35:47.371986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:55.448 [2024-11-19 06:35:47.372820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.448 [2024-11-19 06:35:47.372845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.448 [2024-11-19 06:35:47.372857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.448 [2024-11-19 06:35:47.372868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.448 [2024-11-19 06:35:47.372876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.448 [2024-11-19 06:35:47.372883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.448 [2024-11-19 06:35:47.372892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.448 [2024-11-19 06:35:47.372898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.448 [2024-11-19 06:35:47.372906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.448 [2024-11-19 06:35:47.372913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.448 [2024-11-19 06:35:47.372922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.448 [2024-11-19 06:35:47.372938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.706 06:35:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.706 06:35:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.706 06:35:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.706 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.964 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.964 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.964 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.965 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.965 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:55.965 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.965 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.965 06:35:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.69 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.69 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.69 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.69 2 00:12:08.188 remove_attach_helper took 44.69s to complete (handling 2 nvme drive(s)) 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:08.188 06:35:59 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67271 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67271 ']' 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67271 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67271 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.188 killing process with pid 67271 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67271' 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67271 00:12:08.188 06:35:59 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67271 00:12:09.126 06:36:00 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.956 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.956 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.956 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:09.956 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.218 00:12:10.218 real 2m29.965s 00:12:10.218 user 1m52.125s 00:12:10.218 sys 0m16.491s 00:12:10.218 06:36:01 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.218 ************************************ 00:12:10.219 06:36:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.219 END TEST sw_hotplug 00:12:10.219 ************************************ 00:12:10.219 06:36:01 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:10.219 06:36:01 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.219 06:36:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.219 06:36:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.219 06:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:10.219 ************************************ 00:12:10.219 START TEST nvme_xnvme 00:12:10.219 ************************************ 00:12:10.219 06:36:01 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.219 * Looking for test storage... 00:12:10.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.219 --rc genhtml_branch_coverage=1 00:12:10.219 --rc genhtml_function_coverage=1 00:12:10.219 --rc genhtml_legend=1 00:12:10.219 --rc geninfo_all_blocks=1 00:12:10.219 --rc geninfo_unexecuted_blocks=1 00:12:10.219 00:12:10.219 ' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.219 --rc genhtml_branch_coverage=1 00:12:10.219 --rc genhtml_function_coverage=1 00:12:10.219 --rc genhtml_legend=1 00:12:10.219 --rc geninfo_all_blocks=1 00:12:10.219 --rc geninfo_unexecuted_blocks=1 00:12:10.219 00:12:10.219 ' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.219 --rc genhtml_branch_coverage=1 00:12:10.219 --rc genhtml_function_coverage=1 00:12:10.219 --rc genhtml_legend=1 00:12:10.219 --rc geninfo_all_blocks=1 00:12:10.219 --rc geninfo_unexecuted_blocks=1 00:12:10.219 00:12:10.219 ' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.219 --rc genhtml_branch_coverage=1 00:12:10.219 --rc genhtml_function_coverage=1 00:12:10.219 --rc genhtml_legend=1 00:12:10.219 --rc geninfo_all_blocks=1 00:12:10.219 --rc geninfo_unexecuted_blocks=1 00:12:10.219 00:12:10.219 ' 00:12:10.219 06:36:02 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.219 06:36:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.219 06:36:02 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.219 06:36:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.219 06:36:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.219 06:36:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:10.219 06:36:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.219 06:36:02 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.219 06:36:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.480 ************************************ 00:12:10.480 START TEST xnvme_to_malloc_dd_copy 00:12:10.480 ************************************ 00:12:10.480 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:10.480 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:10.481 06:36:02 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:10.481 06:36:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:10.481 { 00:12:10.481 "subsystems": [ 00:12:10.481 { 00:12:10.481 "subsystem": "bdev", 00:12:10.481 "config": [ 00:12:10.481 { 00:12:10.481 "params": { 00:12:10.481 "block_size": 512, 00:12:10.481 "num_blocks": 2097152, 00:12:10.481 "name": "malloc0" 00:12:10.481 }, 00:12:10.481 "method": "bdev_malloc_create" 00:12:10.481 }, 00:12:10.481 { 00:12:10.481 "params": { 00:12:10.481 "io_mechanism": "libaio", 00:12:10.481 "filename": "/dev/nullb0", 00:12:10.481 "name": "null0" 00:12:10.481 }, 00:12:10.481 "method": "bdev_xnvme_create" 00:12:10.481 }, 00:12:10.481 { 00:12:10.481 "method": "bdev_wait_for_examine" 00:12:10.481 } 00:12:10.481 ] 00:12:10.481 } 00:12:10.481 ] 00:12:10.481 } 00:12:10.481 [2024-11-19 06:36:02.270093] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:10.481 [2024-11-19 06:36:02.270228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68644 ] 00:12:10.742 [2024-11-19 06:36:02.434377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.743 [2024-11-19 06:36:02.556784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.293  [2024-11-19T06:36:05.794Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-19T06:36:06.729Z] Copying: 452/1024 [MB] (226 MBps) [2024-11-19T06:36:07.662Z] Copying: 727/1024 [MB] (274 MBps) [2024-11-19T06:36:09.564Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:12:17.635 00:12:17.895 06:36:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:17.895 06:36:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:17.895 06:36:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.895 06:36:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:17.895 { 00:12:17.895 "subsystems": [ 00:12:17.895 { 00:12:17.895 "subsystem": "bdev", 00:12:17.895 "config": [ 00:12:17.895 { 00:12:17.895 "params": { 00:12:17.895 "block_size": 512, 00:12:17.895 "num_blocks": 2097152, 00:12:17.895 "name": "malloc0" 00:12:17.895 }, 00:12:17.895 "method": "bdev_malloc_create" 00:12:17.895 }, 00:12:17.895 { 00:12:17.895 "params": { 00:12:17.895 "io_mechanism": "libaio", 00:12:17.895 "filename": "/dev/nullb0", 00:12:17.895 "name": "null0" 00:12:17.895 }, 00:12:17.895 "method": "bdev_xnvme_create" 00:12:17.895 }, 00:12:17.895 { 00:12:17.895 "method": "bdev_wait_for_examine" 00:12:17.895 } 00:12:17.895 ] 00:12:17.895 } 00:12:17.895 ] 00:12:17.895 } 00:12:17.895 [2024-11-19 06:36:09.633363] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:17.895 [2024-11-19 06:36:09.633484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68737 ] 00:12:17.895 [2024-11-19 06:36:09.789162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.153 [2024-11-19 06:36:09.876742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.079  [2024-11-19T06:36:12.942Z] Copying: 305/1024 [MB] (305 MBps) [2024-11-19T06:36:13.877Z] Copying: 612/1024 [MB] (306 MBps) [2024-11-19T06:36:14.135Z] Copying: 919/1024 [MB] (306 MBps) [2024-11-19T06:36:16.038Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:12:24.109 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.109 06:36:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.109 { 00:12:24.109 "subsystems": [ 00:12:24.109 { 00:12:24.109 "subsystem": "bdev", 00:12:24.109 "config": [ 00:12:24.109 { 00:12:24.109 "params": { 00:12:24.109 "block_size": 512, 00:12:24.109 "num_blocks": 2097152, 00:12:24.109 "name": "malloc0" 00:12:24.109 }, 00:12:24.109 "method": "bdev_malloc_create" 00:12:24.109 }, 00:12:24.109 { 00:12:24.109 "params": { 00:12:24.109 "io_mechanism": "io_uring", 00:12:24.109 "filename": "/dev/nullb0", 00:12:24.109 "name": "null0" 00:12:24.109 }, 00:12:24.109 "method": "bdev_xnvme_create" 00:12:24.109 }, 00:12:24.109 { 00:12:24.109 "method": "bdev_wait_for_examine" 00:12:24.109 } 00:12:24.109 ] 00:12:24.109 } 00:12:24.109 ] 00:12:24.109 } 00:12:24.109 [2024-11-19 06:36:15.969663] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:24.109 [2024-11-19 06:36:15.970231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68814 ] 00:12:24.368 [2024-11-19 06:36:16.128822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.368 [2024-11-19 06:36:16.211054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.270  [2024-11-19T06:36:19.134Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-19T06:36:20.076Z] Copying: 623/1024 [MB] (311 MBps) [2024-11-19T06:36:20.335Z] Copying: 935/1024 [MB] (311 MBps) [2024-11-19T06:36:22.238Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:30.309 00:12:30.309 06:36:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:30.309 06:36:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:30.309 06:36:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:30.309 06:36:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.310 { 00:12:30.310 "subsystems": [ 00:12:30.310 { 00:12:30.310 "subsystem": "bdev", 00:12:30.310 "config": [ 00:12:30.310 { 00:12:30.310 "params": { 00:12:30.310 "block_size": 512, 00:12:30.310 "num_blocks": 2097152, 00:12:30.310 "name": "malloc0" 00:12:30.310 }, 00:12:30.310 "method": "bdev_malloc_create" 00:12:30.310 }, 00:12:30.310 { 00:12:30.310 "params": { 00:12:30.310 "io_mechanism": "io_uring", 00:12:30.310 "filename": "/dev/nullb0", 00:12:30.310 "name": "null0" 00:12:30.310 }, 00:12:30.310 "method": "bdev_xnvme_create" 00:12:30.310 }, 00:12:30.310 { 00:12:30.310 "method": "bdev_wait_for_examine" 00:12:30.310 } 00:12:30.310 ] 00:12:30.310 } 00:12:30.310 ] 00:12:30.310 } 00:12:30.310 [2024-11-19 06:36:22.170955] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:30.310 [2024-11-19 06:36:22.171068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68890 ] 00:12:30.569 [2024-11-19 06:36:22.328748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.569 [2024-11-19 06:36:22.412570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.470  [2024-11-19T06:36:25.334Z] Copying: 318/1024 [MB] (318 MBps) [2024-11-19T06:36:26.269Z] Copying: 637/1024 [MB] (319 MBps) [2024-11-19T06:36:26.544Z] Copying: 957/1024 [MB] (319 MBps) [2024-11-19T06:36:28.471Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:36.542 00:12:36.542 06:36:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:36.542 06:36:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:36.542 00:12:36.542 real 0m26.154s 00:12:36.542 user 0m23.004s 00:12:36.542 sys 0m2.634s 00:12:36.542 06:36:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.542 ************************************ 00:12:36.542 END TEST xnvme_to_malloc_dd_copy 00:12:36.542 06:36:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:36.542 ************************************ 00:12:36.542 06:36:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:36.542 06:36:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:36.542 06:36:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.542 06:36:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.542 ************************************ 00:12:36.542 START TEST xnvme_bdevperf 00:12:36.542 ************************************ 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:36.542 
06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:36.542 06:36:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:36.542 { 00:12:36.542 "subsystems": [ 00:12:36.542 { 00:12:36.542 "subsystem": "bdev", 00:12:36.542 "config": [ 00:12:36.542 { 00:12:36.542 "params": { 00:12:36.542 "io_mechanism": "libaio", 00:12:36.542 "filename": "/dev/nullb0", 00:12:36.542 "name": "null0" 00:12:36.543 }, 00:12:36.543 "method": "bdev_xnvme_create" 00:12:36.543 }, 00:12:36.543 { 00:12:36.543 "method": "bdev_wait_for_examine" 00:12:36.543 } 00:12:36.543 ] 00:12:36.543 } 00:12:36.543 ] 00:12:36.543 } 00:12:36.543 [2024-11-19 06:36:28.469165] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:36.543 [2024-11-19 06:36:28.469290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68991 ] 00:12:36.802 [2024-11-19 06:36:28.633875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.063 [2024-11-19 06:36:28.754515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.324 Running I/O for 5 seconds... 00:12:39.210 153600.00 IOPS, 600.00 MiB/s [2024-11-19T06:36:32.074Z] 157120.00 IOPS, 613.75 MiB/s [2024-11-19T06:36:33.449Z] 171968.00 IOPS, 671.75 MiB/s [2024-11-19T06:36:34.384Z] 179424.00 IOPS, 700.88 MiB/s 00:12:42.455 Latency(us) 00:12:42.455 [2024-11-19T06:36:34.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.455 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:42.455 null0 : 5.00 183868.07 718.23 0.00 0.00 345.50 108.70 2066.90 00:12:42.456 [2024-11-19T06:36:34.385Z] =================================================================================================================== 00:12:42.456 [2024-11-19T06:36:34.385Z] Total : 183868.07 718.23 0.00 0.00 345.50 108.70 2066.90 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.717 06:36:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.717 { 00:12:42.717 "subsystems": [ 00:12:42.717 { 00:12:42.717 "subsystem": "bdev", 00:12:42.717 "config": [ 00:12:42.717 { 00:12:42.717 "params": { 00:12:42.718 "io_mechanism": "io_uring", 00:12:42.718 "filename": "/dev/nullb0", 00:12:42.718 "name": "null0" 00:12:42.718 }, 00:12:42.718 "method": "bdev_xnvme_create" 00:12:42.718 }, 00:12:42.718 { 00:12:42.718 "method": 
"bdev_wait_for_examine" 00:12:42.718 } 00:12:42.718 ] 00:12:42.718 } 00:12:42.718 ] 00:12:42.718 } 00:12:42.979 [2024-11-19 06:36:34.671130] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:42.979 [2024-11-19 06:36:34.671242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69064 ] 00:12:42.979 [2024-11-19 06:36:34.825325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.241 [2024-11-19 06:36:34.942775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.503 Running I/O for 5 seconds... 00:12:45.381 175424.00 IOPS, 685.25 MiB/s [2024-11-19T06:36:38.242Z] 196000.00 IOPS, 765.62 MiB/s [2024-11-19T06:36:39.616Z] 207296.00 IOPS, 809.75 MiB/s [2024-11-19T06:36:40.550Z] 212912.00 IOPS, 831.69 MiB/s 00:12:48.621 Latency(us) 00:12:48.621 [2024-11-19T06:36:40.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.621 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:48.621 null0 : 5.00 216188.03 844.48 0.00 0.00 293.73 146.51 1978.68 00:12:48.621 [2024-11-19T06:36:40.550Z] =================================================================================================================== 00:12:48.621 [2024-11-19T06:36:40.550Z] Total : 216188.03 844.48 0.00 0.00 293.73 146.51 1978.68 00:12:48.880 06:36:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:48.880 06:36:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:48.880 00:12:48.880 real 0m12.421s 00:12:48.880 user 0m9.995s 00:12:48.880 sys 0m2.187s 00:12:48.880 06:36:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.880 ************************************ 00:12:48.880 END TEST xnvme_bdevperf 00:12:48.880 ************************************ 00:12:48.880 06:36:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.162 00:12:49.162 real 0m38.849s 00:12:49.162 user 0m33.117s 00:12:49.162 sys 0m4.940s 00:12:49.162 06:36:40 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.162 06:36:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.162 ************************************ 00:12:49.162 END TEST nvme_xnvme 00:12:49.162 ************************************ 00:12:49.162 06:36:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:49.162 06:36:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.162 06:36:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.162 06:36:40 -- common/autotest_common.sh@10 -- # set +x 00:12:49.162 ************************************ 00:12:49.162 START TEST blockdev_xnvme 00:12:49.162 ************************************ 00:12:49.162 06:36:40 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:49.162 * Looking for test storage... 
00:12:49.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:49.162 06:36:40 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.162 06:36:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.162 06:36:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.162 06:36:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.162 06:36:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.162 06:36:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.162 --rc genhtml_branch_coverage=1 00:12:49.162 --rc genhtml_function_coverage=1 00:12:49.162 --rc genhtml_legend=1 00:12:49.162 --rc geninfo_all_blocks=1 00:12:49.162 --rc geninfo_unexecuted_blocks=1 00:12:49.162 00:12:49.162 ' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.162 --rc genhtml_branch_coverage=1 00:12:49.162 --rc genhtml_function_coverage=1 00:12:49.162 --rc genhtml_legend=1 
00:12:49.162 --rc geninfo_all_blocks=1 00:12:49.162 --rc geninfo_unexecuted_blocks=1 00:12:49.162 00:12:49.162 ' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.162 --rc genhtml_branch_coverage=1 00:12:49.162 --rc genhtml_function_coverage=1 00:12:49.162 --rc genhtml_legend=1 00:12:49.162 --rc geninfo_all_blocks=1 00:12:49.162 --rc geninfo_unexecuted_blocks=1 00:12:49.162 00:12:49.162 ' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.162 --rc genhtml_branch_coverage=1 00:12:49.162 --rc genhtml_function_coverage=1 00:12:49.162 --rc genhtml_legend=1 00:12:49.162 --rc geninfo_all_blocks=1 00:12:49.162 --rc geninfo_unexecuted_blocks=1 00:12:49.162 00:12:49.162 ' 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69202 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69202 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69202 ']' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.162 06:36:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.162 06:36:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:49.446 [2024-11-19 06:36:41.108587] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:49.446 [2024-11-19 06:36:41.108727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69202 ] 00:12:49.446 [2024-11-19 06:36:41.265660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.708 [2024-11-19 06:36:41.388335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.280 06:36:42 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.280 06:36:42 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:50.280 06:36:42 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:50.280 06:36:42 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:50.280 06:36:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:50.280 06:36:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:50.280 06:36:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:50.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:50.800 Waiting for block devices as requested 00:12:50.800 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.800 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.059 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.324 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned 
nvme1n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:56.324 nvme0n1 00:12:56.324 nvme1n1 00:12:56.324 nvme2n1 00:12:56.324 nvme2n2 00:12:56.324 nvme2n3 00:12:56.324 nvme3n1 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.324 06:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.324 06:36:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:56.324 06:36:48 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.324 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:56.324 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:56.325 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e5ded51a-8e25-4313-ab5a-258e7a3f9247"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e5ded51a-8e25-4313-ab5a-258e7a3f9247",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "08ee0992-b1c4-4927-a9c5-228eca0495be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "08ee0992-b1c4-4927-a9c5-228eca0495be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "83894415-becf-432e-bb11-ce533dbac663"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83894415-becf-432e-bb11-ce533dbac663",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "66ba5930-74b2-4913-ac3d-13c6068c6580"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66ba5930-74b2-4913-ac3d-13c6068c6580",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9bb0fb56-d4b3-437c-82e9-5490636fb33f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bb0fb56-d4b3-437c-82e9-5490636fb33f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a3c3d36f-3008-4a1d-bdda-41e525f21294"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a3c3d36f-3008-4a1d-bdda-41e525f21294",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:56.325 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:56.325 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:56.325 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:56.325 06:36:48 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69202 00:12:56.325 06:36:48 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69202 ']' 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69202 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69202 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.325 killing process with pid 69202 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69202' 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69202 00:12:56.325 06:36:48 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69202 00:12:57.701 06:36:49 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:57.701 06:36:49 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:57.701 06:36:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.701 06:36:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.701 06:36:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.701 ************************************ 00:12:57.701 START TEST bdev_hello_world 00:12:57.701 ************************************ 00:12:57.701 06:36:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:57.701 [2024-11-19 06:36:49.301945] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:57.701 [2024-11-19 06:36:49.302061] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69566 ] 00:12:57.701 [2024-11-19 06:36:49.457339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.701 [2024-11-19 06:36:49.532012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.959 [2024-11-19 06:36:49.815108] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:57.959 [2024-11-19 06:36:49.815145] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:57.959 [2024-11-19 06:36:49.815157] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:57.959 [2024-11-19 06:36:49.816622] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:57.959 [2024-11-19 06:36:49.816935] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:57.959 [2024-11-19 06:36:49.816958] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:57.959 [2024-11-19 06:36:49.817166] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:57.959 00:12:57.959 [2024-11-19 06:36:49.817189] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:58.527 00:12:58.527 real 0m1.112s 00:12:58.527 user 0m0.851s 00:12:58.527 sys 0m0.150s 00:12:58.527 ************************************ 00:12:58.527 END TEST bdev_hello_world 00:12:58.527 ************************************ 00:12:58.527 06:36:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.527 06:36:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:58.527 06:36:50 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:58.527 06:36:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.527 06:36:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.527 06:36:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.527 ************************************ 00:12:58.527 START TEST bdev_bounds 00:12:58.527 ************************************ 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69597 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:58.527 Process bdevio pid: 69597 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69597' 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69597 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69597 ']' 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.527 06:36:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:58.785 [2024-11-19 06:36:50.484743] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:58.785 [2024-11-19 06:36:50.485195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69597 ] 00:12:58.785 [2024-11-19 06:36:50.640986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.043 [2024-11-19 06:36:50.720054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.043 [2024-11-19 06:36:50.720410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.043 [2024-11-19 06:36:50.720367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.610 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.610 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:59.610 06:36:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:59.610 I/O targets: 00:12:59.610 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:59.610 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:59.610 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.610 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.610 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.610 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:59.610 00:12:59.610 00:12:59.610 CUnit - A unit testing framework for C - Version 2.1-3 00:12:59.610 http://cunit.sourceforge.net/ 00:12:59.610 00:12:59.610 00:12:59.610 Suite: bdevio tests on: nvme3n1 00:12:59.610 Test: blockdev write read block ...passed 00:12:59.610 Test: blockdev write zeroes read block ...passed 00:12:59.610 Test: blockdev write zeroes read no split ...passed 00:12:59.610 Test: blockdev write zeroes read split ...passed 00:12:59.610 Test: blockdev write zeroes read split partial ...passed 00:12:59.610 Test: blockdev reset ...passed 00:12:59.610 Test: blockdev write read 8 blocks ...passed 00:12:59.610 Test: blockdev write read size > 128k ...passed 00:12:59.610 Test: blockdev write read invalid size ...passed 00:12:59.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.610 Test: blockdev write read max offset ...passed 00:12:59.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.610 Test: blockdev writev readv 8 blocks ...passed 00:12:59.610 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.610 Test: blockdev writev readv block ...passed 00:12:59.610 Test: blockdev writev readv size > 128k ...passed 00:12:59.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.610 Test: blockdev comparev and writev ...passed 00:12:59.610 Test: blockdev nvme passthru rw ...passed 00:12:59.610 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.610 Test: blockdev nvme admin passthru ...passed 00:12:59.610 Test: blockdev copy ...passed 00:12:59.610 Suite: bdevio tests on: nvme2n3 00:12:59.610 Test: blockdev write read block ...passed 00:12:59.610 Test: blockdev write zeroes read block ...passed 00:12:59.610 Test: blockdev write zeroes read no split ...passed 00:12:59.610 Test: blockdev write zeroes read split ...passed 00:12:59.610 Test: blockdev write zeroes read split partial ...passed 00:12:59.610 Test: blockdev reset ...passed 
00:12:59.610 Test: blockdev write read 8 blocks ...passed 00:12:59.610 Test: blockdev write read size > 128k ...passed 00:12:59.610 Test: blockdev write read invalid size ...passed 00:12:59.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.610 Test: blockdev write read max offset ...passed 00:12:59.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.610 Test: blockdev writev readv 8 blocks ...passed 00:12:59.610 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.610 Test: blockdev writev readv block ...passed 00:12:59.610 Test: blockdev writev readv size > 128k ...passed 00:12:59.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.610 Test: blockdev comparev and writev ...passed 00:12:59.610 Test: blockdev nvme passthru rw ...passed 00:12:59.610 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.610 Test: blockdev nvme admin passthru ...passed 00:12:59.610 Test: blockdev copy ...passed 00:12:59.610 Suite: bdevio tests on: nvme2n2 00:12:59.610 Test: blockdev write read block ...passed 00:12:59.610 Test: blockdev write zeroes read block ...passed 00:12:59.610 Test: blockdev write zeroes read no split ...passed 00:12:59.610 Test: blockdev write zeroes read split ...passed 00:12:59.868 Test: blockdev write zeroes read split partial ...passed 00:12:59.868 Test: blockdev reset ...passed 00:12:59.868 Test: blockdev write read 8 blocks ...passed 00:12:59.868 Test: blockdev write read size > 128k ...passed 00:12:59.868 Test: blockdev write read invalid size ...passed 00:12:59.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.868 Test: blockdev write read max offset ...passed 00:12:59.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.868 Test: blockdev writev readv 8 blocks ...passed 00:12:59.868 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.868 Test: blockdev writev readv block ...passed 00:12:59.869 Test: blockdev writev readv size > 128k ...passed 00:12:59.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.869 Test: blockdev comparev and writev ...passed 00:12:59.869 Test: blockdev nvme passthru rw ...passed 00:12:59.869 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.869 Test: blockdev nvme admin passthru ...passed 00:12:59.869 Test: blockdev copy ...passed 00:12:59.869 Suite: bdevio tests on: nvme2n1 00:12:59.869 Test: blockdev write read block ...passed 00:12:59.869 Test: blockdev write zeroes read block ...passed 00:12:59.869 Test: blockdev write zeroes read no split ...passed 00:12:59.869 Test: blockdev write zeroes read split ...passed 00:12:59.869 Test: blockdev write zeroes read split partial ...passed 00:12:59.869 Test: blockdev reset ...passed 00:12:59.869 Test: blockdev write read 8 blocks ...passed 00:12:59.869 Test: blockdev write read size > 128k ...passed 00:12:59.869 Test: blockdev write read invalid size ...passed 00:12:59.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.869 Test: blockdev write read max offset ...passed 00:12:59.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.869 Test: blockdev writev readv 8 blocks 
...passed 00:12:59.869 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.869 Test: blockdev writev readv block ...passed 00:12:59.869 Test: blockdev writev readv size > 128k ...passed 00:12:59.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.869 Test: blockdev comparev and writev ...passed 00:12:59.869 Test: blockdev nvme passthru rw ...passed 00:12:59.869 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.869 Test: blockdev nvme admin passthru ...passed 00:12:59.869 Test: blockdev copy ...passed 00:12:59.869 Suite: bdevio tests on: nvme1n1 00:12:59.869 Test: blockdev write read block ...passed 00:12:59.869 Test: blockdev write zeroes read block ...passed 00:12:59.869 Test: blockdev write zeroes read no split ...passed 00:12:59.869 Test: blockdev write zeroes read split ...passed 00:12:59.869 Test: blockdev write zeroes read split partial ...passed 00:12:59.869 Test: blockdev reset ...passed 00:12:59.869 Test: blockdev write read 8 blocks ...passed 00:12:59.869 Test: blockdev write read size > 128k ...passed 00:12:59.869 Test: blockdev write read invalid size ...passed 00:12:59.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.869 Test: blockdev write read max offset ...passed 00:12:59.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.869 Test: blockdev writev readv 8 blocks ...passed 00:12:59.869 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.869 Test: blockdev writev readv block ...passed 00:12:59.869 Test: blockdev writev readv size > 128k ...passed 00:12:59.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.869 Test: blockdev comparev and writev ...passed 00:12:59.869 Test: blockdev nvme passthru rw ...passed 00:12:59.869 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.869 Test: blockdev nvme admin passthru ...passed 00:12:59.869 Test: blockdev copy ...passed 00:12:59.869 Suite: bdevio tests on: nvme0n1 00:12:59.869 Test: blockdev write read block ...passed 00:12:59.869 Test: blockdev write zeroes read block ...passed 00:12:59.869 Test: blockdev write zeroes read no split ...passed 00:12:59.869 Test: blockdev write zeroes read split ...passed 00:12:59.869 Test: blockdev write zeroes read split partial ...passed 00:12:59.869 Test: blockdev reset ...passed 00:12:59.869 Test: blockdev write read 8 blocks ...passed 00:12:59.869 Test: blockdev write read size > 128k ...passed 00:12:59.869 Test: blockdev write read invalid size ...passed 00:12:59.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.869 Test: blockdev write read max offset ...passed 00:12:59.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.869 Test: blockdev writev readv 8 blocks ...passed 00:12:59.869 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.869 Test: blockdev writev readv block ...passed 00:12:59.869 Test: blockdev writev readv size > 128k ...passed 00:12:59.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.869 Test: blockdev comparev and writev ...passed 00:12:59.869 Test: blockdev nvme passthru rw ...passed 00:12:59.869 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.869 Test: blockdev nvme admin passthru ...passed 00:12:59.869 Test: blockdev copy ...passed 
00:12:59.869 00:12:59.869 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.869 suites 6 6 n/a 0 0 00:12:59.869 tests 138 138 138 0 0 00:12:59.869 asserts 780 780 780 0 n/a 00:12:59.869 00:12:59.869 Elapsed time = 0.845 seconds 00:12:59.869 0 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69597 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69597 ']' 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69597 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69597 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.869 killing process with pid 69597 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69597' 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69597 00:12:59.869 06:36:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69597 00:13:00.437 06:36:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:00.437 00:13:00.437 real 0m1.899s 00:13:00.437 user 0m4.852s 00:13:00.437 sys 0m0.259s 00:13:00.437 06:36:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.437 ************************************ 00:13:00.437 END TEST bdev_bounds 00:13:00.437 ************************************ 00:13:00.437 06:36:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:00.697 06:36:52 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:00.697 06:36:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:00.697 06:36:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.697 06:36:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.697 ************************************ 00:13:00.697 START TEST bdev_nbd 00:13:00.697 ************************************ 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69654 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69654 /var/tmp/spdk-nbd.sock 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69654 ']' 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:00.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.697 06:36:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:00.697 [2024-11-19 06:36:52.453832] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:00.697 [2024-11-19 06:36:52.454121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.697 [2024-11-19 06:36:52.612972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.956 [2024-11-19 06:36:52.690287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.523 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.781 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.782 
1+0 records in 00:13:01.782 1+0 records out 00:13:01.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341708 s, 12.0 MB/s 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.782 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.043 1+0 records in 00:13:02.043 1+0 records out 00:13:02.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448661 s, 9.1 MB/s 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:02.043 06:36:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.043 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.304 1+0 records in 00:13:02.304 1+0 records out 00:13:02.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832737 s, 4.9 MB/s 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.304 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.305 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.305 06:36:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.305 1+0 records in 00:13:02.305 1+0 records out 00:13:02.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138765 s, 3.0 MB/s 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.305 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.565 1+0 records in 00:13:02.565 1+0 records out 00:13:02.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121743 s, 3.4 MB/s 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.565 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:02.826 06:36:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.826 1+0 records in 00:13:02.826 1+0 records out 00:13:02.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108861 s, 3.8 MB/s 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.826 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:03.088 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd0", 00:13:03.088 "bdev_name": "nvme0n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd1", 00:13:03.088 "bdev_name": "nvme1n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd2", 00:13:03.088 "bdev_name": "nvme2n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd3", 00:13:03.088 "bdev_name": "nvme2n2" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd4", 00:13:03.088 "bdev_name": "nvme2n3" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd5", 00:13:03.088 "bdev_name": "nvme3n1" 00:13:03.088 } 00:13:03.088 ]' 00:13:03.088 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:03.088 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd0", 00:13:03.088 "bdev_name": "nvme0n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd1", 00:13:03.088 "bdev_name": "nvme1n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd2", 00:13:03.088 "bdev_name": "nvme2n1" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd3", 00:13:03.088 "bdev_name": "nvme2n2" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd4", 00:13:03.088 "bdev_name": "nvme2n3" 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "nbd_device": "/dev/nbd5", 00:13:03.088 "bdev_name": "nvme3n1" 00:13:03.088 } 00:13:03.088 ]' 00:13:03.088 06:36:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.089 06:36:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.350 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.611 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.873 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.135 06:36:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.395 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.653 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.654 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:04.912 /dev/nbd0 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.912 1+0 records in 00:13:04.912 1+0 records out 00:13:04.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430022 s, 9.5 MB/s 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.912 06:36:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:05.170 /dev/nbd1 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.170 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.170 1+0 records in 00:13:05.170 1+0 records out 00:13:05.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471608 s, 8.7 MB/s 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.171 06:36:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.171 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:05.429 /dev/nbd10 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.429 1+0 records in 00:13:05.429 1+0 records out 00:13:05.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419983 s, 9.8 MB/s 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.429 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:05.713 /dev/nbd11 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.713 06:36:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.713 1+0 records in 00:13:05.713 1+0 records out 00:13:05.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479739 s, 8.5 MB/s 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.713 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:05.972 /dev/nbd12 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.972 1+0 records in 00:13:05.972 1+0 records out 00:13:05.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254811 s, 16.1 MB/s 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.972 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:05.972 /dev/nbd13 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.230 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.231 1+0 records in 00:13:06.231 1+0 records out 00:13:06.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386254 s, 10.6 MB/s 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.231 06:36:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.231 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd0", 00:13:06.231 "bdev_name": "nvme0n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd1", 00:13:06.231 "bdev_name": "nvme1n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd10", 00:13:06.231 "bdev_name": "nvme2n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd11", 00:13:06.231 "bdev_name": "nvme2n2" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd12", 00:13:06.231 "bdev_name": "nvme2n3" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd13", 00:13:06.231 "bdev_name": "nvme3n1" 00:13:06.231 } 00:13:06.231 ]' 00:13:06.231 06:36:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.231 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd0", 00:13:06.231 "bdev_name": "nvme0n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd1", 00:13:06.231 "bdev_name": "nvme1n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd10", 00:13:06.231 "bdev_name": "nvme2n1" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd11", 00:13:06.231 "bdev_name": "nvme2n2" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd12", 00:13:06.231 "bdev_name": "nvme2n3" 00:13:06.231 }, 00:13:06.231 { 00:13:06.231 "nbd_device": "/dev/nbd13", 00:13:06.231 "bdev_name": "nvme3n1" 00:13:06.231 } 00:13:06.231 ]' 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:06.491 /dev/nbd1 00:13:06.491 /dev/nbd10 00:13:06.491 /dev/nbd11 00:13:06.491 /dev/nbd12 00:13:06.491 /dev/nbd13' 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:06.491 /dev/nbd1 00:13:06.491 /dev/nbd10 00:13:06.491 /dev/nbd11 00:13:06.491 /dev/nbd12 00:13:06.491 /dev/nbd13' 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:06.491 256+0 records in 00:13:06.491 256+0 records out 00:13:06.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101354 s, 103 MB/s 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:06.491 256+0 records in 00:13:06.491 256+0 records out 00:13:06.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0641159 s, 16.4 MB/s 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:06.491 256+0 records in 00:13:06.491 256+0 records out 00:13:06.491 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.147325 s, 7.1 MB/s 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.491 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:06.753 256+0 records in 00:13:06.753 256+0 records out 00:13:06.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218203 s, 4.8 MB/s 00:13:06.753 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.753 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:07.014 256+0 records in 00:13:07.014 256+0 records out 00:13:07.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155889 s, 6.7 MB/s 00:13:07.014 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.014 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:07.014 256+0 records in 00:13:07.014 256+0 records out 00:13:07.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105271 s, 10.0 MB/s 00:13:07.014 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.014 06:36:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:07.273 256+0 records in 00:13:07.273 256+0 records out 00:13:07.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228155 s, 4.6 MB/s 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:07.273 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.274 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.531 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.790 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.048 06:36:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:08.307 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.308 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.566 06:37:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.566 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.567 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.567 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.825 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:08.826 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:09.084 malloc_lvol_verify 00:13:09.084 06:37:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:09.342 64c12059-c2cc-4f72-84c8-69c2abaa64e5 00:13:09.342 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:09.601 41900082-dd1c-404e-a30a-8ec7a9536821 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:09.601 /dev/nbd0 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:09.601 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.601 Discarding device blocks: 0/4096 done 00:13:09.601 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:09.601 00:13:09.601 Allocating group tables: 0/1 done 00:13:09.601 Writing inode tables: 0/1 done 00:13:09.601 Creating journal (1024 blocks): done 00:13:09.601 Writing superblocks and filesystem accounting information: 0/1 done 00:13:09.601 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.601 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69654 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69654 ']' 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69654 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69654 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.860 killing process with pid 69654 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69654' 00:13:09.860 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69654 00:13:09.861 06:37:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69654 00:13:10.429 06:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:10.429 00:13:10.429 real 0m9.948s 00:13:10.429 user 0m13.836s 00:13:10.429 sys 0m3.355s 00:13:10.429 06:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.429 06:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.429 ************************************ 
00:13:10.429 END TEST bdev_nbd 00:13:10.429 ************************************ 00:13:10.691 06:37:02 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:10.691 06:37:02 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:10.691 06:37:02 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:10.691 06:37:02 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:10.691 06:37:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.691 06:37:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.691 06:37:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 ************************************ 00:13:10.691 START TEST bdev_fio 00:13:10.691 ************************************ 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:10.691 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:10.691 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 ************************************ 00:13:10.692 START TEST bdev_fio_rw_verify 00:13:10.692 ************************************ 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.692 06:37:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.952 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.952 fio-3.35 00:13:10.952 Starting 6 threads 00:13:23.200 00:13:23.200 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70049: Tue Nov 19 06:37:13 2024 00:13:23.200 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(1057MiB/10002msec) 00:13:23.200 slat (usec): min=2, max=2562, avg= 5.04, stdev=11.65 00:13:23.200 clat (usec): min=81, max=10028, avg=663.10, 
stdev=607.71 00:13:23.200 lat (usec): min=84, max=10043, avg=668.13, stdev=608.27 00:13:23.200 clat percentiles (usec): 00:13:23.200 | 50.000th=[ 441], 99.000th=[ 2900], 99.900th=[ 4228], 99.990th=[ 5407], 00:13:23.200 | 99.999th=[10028] 00:13:23.200 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(1068MiB/10002msec); 0 zone resets 00:13:23.200 slat (usec): min=10, max=4065, avg=29.16, stdev=99.40 00:13:23.200 clat (usec): min=63, max=7069, avg=848.97, stdev=724.76 00:13:23.200 lat (usec): min=77, max=7259, avg=878.13, stdev=739.53 00:13:23.200 clat percentiles (usec): 00:13:23.200 | 50.000th=[ 553], 99.000th=[ 3392], 99.900th=[ 4817], 99.990th=[ 6194], 00:13:23.200 | 99.999th=[ 7046] 00:13:23.200 bw ( KiB/s): min=57087, max=195856, per=100.00%, avg=110070.63, stdev=6802.62, samples=114 00:13:23.200 iops : min=14271, max=48964, avg=27517.00, stdev=1700.68, samples=114 00:13:23.200 lat (usec) : 100=0.06%, 250=13.21%, 500=37.10%, 750=18.22%, 1000=7.82% 00:13:23.200 lat (msec) : 2=16.96%, 4=6.39%, 10=0.25%, 20=0.01% 00:13:23.200 cpu : usr=45.16%, sys=31.28%, ctx=7956, majf=0, minf=23326 00:13:23.200 IO depths : 1=11.9%, 2=24.3%, 4=50.6%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.200 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.200 issued rwts: total=270602,273366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.200 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:23.200 00:13:23.200 Run status group 0 (all jobs): 00:13:23.200 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1057MiB (1108MB), run=10002-10002msec 00:13:23.200 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1068MiB (1120MB), run=10002-10002msec 00:13:23.200 ----------------------------------------------------- 00:13:23.200 Suppressions used: 00:13:23.200 count bytes template 00:13:23.200 6 48 /usr/src/fio/parse.c 00:13:23.200 2572 246912 /usr/src/fio/iolog.c 00:13:23.200 1 8 libtcmalloc_minimal.so 00:13:23.200 1 904 libcrypto.so 00:13:23.200 ----------------------------------------------------- 00:13:23.200 00:13:23.200 00:13:23.200 real 0m11.880s 00:13:23.200 user 0m28.531s 00:13:23.200 sys 0m19.087s 00:13:23.200 ************************************ 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:23.200 END TEST bdev_fio_rw_verify 00:13:23.200 ************************************ 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e5ded51a-8e25-4313-ab5a-258e7a3f9247"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e5ded51a-8e25-4313-ab5a-258e7a3f9247",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "08ee0992-b1c4-4927-a9c5-228eca0495be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "08ee0992-b1c4-4927-a9c5-228eca0495be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "83894415-becf-432e-bb11-ce533dbac663"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83894415-becf-432e-bb11-ce533dbac663",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "66ba5930-74b2-4913-ac3d-13c6068c6580"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66ba5930-74b2-4913-ac3d-13c6068c6580",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9bb0fb56-d4b3-437c-82e9-5490636fb33f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bb0fb56-d4b3-437c-82e9-5490636fb33f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a3c3d36f-3008-4a1d-bdda-41e525f21294"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a3c3d36f-3008-4a1d-bdda-41e525f21294",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:23.200 /home/vagrant/spdk_repo/spdk 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:23.200 06:37:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
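The bdev_fio stage above builds bdev.fio by echoing one [job_<bdev>] section per bdev and then passing the spdk_bdev ioengine parameters on the fio command line. Below is a condensed bash sketch of that assembly, using only the names and parameters echoed in this trace; it is illustrative, not the actual blockdev.sh code.
# Sketch only: mirrors the echoed job sections above; the bdev names and fio
# parameters come from this log, the loop itself is a simplified re-creation.
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)

echo "serialize_overlap=1" >> "$fio_cfg"      # echoed once after the AIO / fio-3.x checks in the trace

for b in "${bdevs[@]}"; do
  echo "[job_${b}]"    >> "$fio_cfg"          # one fio job section per bdev
  echo "filename=${b}" >> "$fio_cfg"          # the spdk_bdev engine addresses bdevs by name
done

# The verify pass then runs fio with the engine parameters seen above:
#   fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_cfg" \
#       --verify_state_save=0 --spdk_json_conf=.../test/bdev/bdev.json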
00:13:23.200 00:13:23.201 real 0m12.048s 00:13:23.201 user 0m28.610s 00:13:23.201 sys 0m19.157s 00:13:23.201 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.201 06:37:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:23.201 ************************************ 00:13:23.201 END TEST bdev_fio 00:13:23.201 ************************************ 00:13:23.201 06:37:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:23.201 06:37:14 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:23.201 06:37:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:23.201 06:37:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.201 06:37:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.201 ************************************ 00:13:23.201 START TEST bdev_verify 00:13:23.201 ************************************ 00:13:23.201 06:37:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:23.201 [2024-11-19 06:37:14.576233] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:23.201 [2024-11-19 06:37:14.576369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:13:23.201 [2024-11-19 06:37:14.738720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:23.201 [2024-11-19 06:37:14.863080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.201 [2024-11-19 06:37:14.863254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.471 Running I/O for 5 seconds... 
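The bdev_verify stage just above drives the same bdevs with SPDK's bdevperf example instead of fio. Restated as a standalone command with the common option meanings spelled out (the big_io variant later in this log only changes -o to 65536), this is a sketch of the logged invocation, not a different test:
# Option meanings are the usual bdevperf ones; -C and the trailing '' are
# kept exactly as the test passes them.
#   -q 128     queue depth
#   -o 4096    I/O size in bytes
#   -w verify  read/write workload with data verification
#   -t 5       run time in seconds
#   -m 0x3     core mask (reactors on cores 0 and 1, as reported above)
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''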
00:13:25.793 23712.00 IOPS, 92.62 MiB/s [2024-11-19T06:37:18.664Z] 23568.00 IOPS, 92.06 MiB/s [2024-11-19T06:37:19.607Z] 23200.00 IOPS, 90.62 MiB/s [2024-11-19T06:37:20.551Z] 23176.00 IOPS, 90.53 MiB/s [2024-11-19T06:37:20.551Z] 22899.20 IOPS, 89.45 MiB/s 00:13:28.622 Latency(us) 00:13:28.622 [2024-11-19T06:37:20.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.622 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0xa0000 00:13:28.622 nvme0n1 : 5.05 1621.52 6.33 0.00 0.00 78806.30 18047.61 94371.84 00:13:28.622 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0xa0000 length 0xa0000 00:13:28.622 nvme0n1 : 5.08 1487.01 5.81 0.00 0.00 85933.56 8015.56 101631.21 00:13:28.622 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0xbd0bd 00:13:28.622 nvme1n1 : 5.05 2362.08 9.23 0.00 0.00 53946.73 6704.84 62914.56 00:13:28.622 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:28.622 nvme1n1 : 5.07 2404.38 9.39 0.00 0.00 52935.58 6553.60 59284.87 00:13:28.622 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0x80000 00:13:28.622 nvme2n1 : 5.06 1870.70 7.31 0.00 0.00 67954.56 8418.86 78643.20 00:13:28.622 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x80000 length 0x80000 00:13:28.622 nvme2n1 : 5.07 1866.63 7.29 0.00 0.00 68245.60 11746.07 62914.56 00:13:28.622 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0x80000 00:13:28.622 nvme2n2 : 5.06 1822.16 7.12 0.00 0.00 69615.70 10183.29 78643.20 00:13:28.622 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x80000 length 0x80000 00:13:28.622 nvme2n2 : 5.06 1845.72 7.21 0.00 0.00 68889.08 15022.87 69770.63 00:13:28.622 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0x80000 00:13:28.622 nvme2n3 : 5.07 1817.70 7.10 0.00 0.00 69666.39 10233.70 73803.62 00:13:28.622 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x80000 length 0x80000 00:13:28.622 nvme2n3 : 5.08 1838.49 7.18 0.00 0.00 69045.85 8973.39 77030.01 00:13:28.622 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x0 length 0x20000 00:13:28.622 nvme3n1 : 5.07 1842.10 7.20 0.00 0.00 68684.93 6024.27 74206.92 00:13:28.622 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.622 Verification LBA range: start 0x20000 length 0x20000 00:13:28.622 nvme3n1 : 5.08 1840.42 7.19 0.00 0.00 68839.52 5016.02 69770.63 00:13:28.622 [2024-11-19T06:37:20.551Z] =================================================================================================================== 00:13:28.622 [2024-11-19T06:37:20.551Z] Total : 22618.91 88.36 0.00 0.00 67450.57 5016.02 101631.21 00:13:29.566 00:13:29.566 real 0m6.731s 00:13:29.566 user 0m10.873s 00:13:29.566 sys 0m1.450s 00:13:29.566 06:37:21 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.566 ************************************ 00:13:29.566 END TEST bdev_verify 00:13:29.566 ************************************ 00:13:29.566 06:37:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:29.566 06:37:21 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:29.566 06:37:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:29.566 06:37:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.566 06:37:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.566 ************************************ 00:13:29.566 START TEST bdev_verify_big_io 00:13:29.566 ************************************ 00:13:29.566 06:37:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:29.566 [2024-11-19 06:37:21.381232] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:29.566 [2024-11-19 06:37:21.381371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:13:29.827 [2024-11-19 06:37:21.546671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.827 [2024-11-19 06:37:21.668918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.827 [2024-11-19 06:37:21.668959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.399 Running I/O for 5 seconds... 
00:13:35.618 1168.00 IOPS, 73.00 MiB/s [2024-11-19T06:37:28.119Z] 2681.00 IOPS, 167.56 MiB/s [2024-11-19T06:37:28.379Z] 3245.00 IOPS, 202.81 MiB/s 00:13:36.450 Latency(us) 00:13:36.450 [2024-11-19T06:37:28.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.450 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x0 length 0xa000 00:13:36.450 nvme0n1 : 5.77 166.39 10.40 0.00 0.00 732275.74 79853.10 1025991.29 00:13:36.450 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0xa000 length 0xa000 00:13:36.450 nvme0n1 : 5.80 125.42 7.84 0.00 0.00 984476.75 35288.62 942105.21 00:13:36.450 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x0 length 0xbd0b 00:13:36.450 nvme1n1 : 5.83 137.18 8.57 0.00 0.00 873846.50 13913.80 1948738.17 00:13:36.450 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:36.450 nvme1n1 : 5.76 131.02 8.19 0.00 0.00 918059.80 46379.32 1858399.31 00:13:36.450 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x0 length 0x8000 00:13:36.450 nvme2n1 : 5.84 120.58 7.54 0.00 0.00 963984.15 166965.56 1819682.66 00:13:36.450 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x8000 length 0x8000 00:13:36.450 nvme2n1 : 5.82 140.24 8.77 0.00 0.00 852037.59 11897.30 942105.21 00:13:36.450 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x0 length 0x8000 00:13:36.450 nvme2n2 : 5.84 140.21 8.76 0.00 0.00 812734.05 36498.51 1477685.56 00:13:36.450 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x8000 length 0x8000 00:13:36.450 nvme2n2 : 5.81 136.36 8.52 0.00 0.00 853536.86 8318.03 2051982.57 00:13:36.450 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x0 length 0x8000 00:13:36.450 nvme2n3 : 5.84 128.88 8.05 0.00 0.00 853712.72 27827.59 1574477.19 00:13:36.450 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.450 Verification LBA range: start 0x8000 length 0x8000 00:13:36.451 nvme2n3 : 5.81 140.37 8.77 0.00 0.00 803169.20 11544.42 858219.13 00:13:36.451 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.451 Verification LBA range: start 0x0 length 0x2000 00:13:36.451 nvme3n1 : 5.91 151.83 9.49 0.00 0.00 704919.32 7763.50 1613193.85 00:13:36.451 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.451 Verification LBA range: start 0x2000 length 0x2000 00:13:36.451 nvme3n1 : 5.82 165.07 10.32 0.00 0.00 659951.98 11947.72 896935.78 00:13:36.451 [2024-11-19T06:37:28.380Z] =================================================================================================================== 00:13:36.451 [2024-11-19T06:37:28.380Z] Total : 1683.54 105.22 0.00 0.00 825663.70 7763.50 2051982.57 00:13:37.403 00:13:37.403 real 0m7.797s 00:13:37.403 user 0m14.168s 00:13:37.403 sys 0m0.493s 00:13:37.404 06:37:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.404 
************************************ 00:13:37.404 END TEST bdev_verify_big_io 00:13:37.404 ************************************ 00:13:37.404 06:37:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.404 06:37:29 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.404 06:37:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:37.404 06:37:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.404 06:37:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.404 ************************************ 00:13:37.404 START TEST bdev_write_zeroes 00:13:37.404 ************************************ 00:13:37.404 06:37:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.404 [2024-11-19 06:37:29.244341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:37.404 [2024-11-19 06:37:29.244490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70433 ] 00:13:37.671 [2024-11-19 06:37:29.411829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.671 [2024-11-19 06:37:29.532011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.246 Running I/O for 1 seconds... 
00:13:39.190 76096.00 IOPS, 297.25 MiB/s 00:13:39.190 Latency(us) 00:13:39.190 [2024-11-19T06:37:31.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.190 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme0n1 : 1.02 12554.99 49.04 0.00 0.00 10185.49 5419.32 18652.55 00:13:39.190 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme1n1 : 1.02 13294.66 51.93 0.00 0.00 9606.82 3831.34 17442.66 00:13:39.190 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme2n1 : 1.02 12540.33 48.99 0.00 0.00 10111.75 4587.52 18450.90 00:13:39.190 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme2n2 : 1.02 12526.19 48.93 0.00 0.00 10114.88 4663.14 18350.08 00:13:39.190 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme2n3 : 1.02 12458.16 48.66 0.00 0.00 10161.00 4763.96 18450.90 00:13:39.190 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:39.190 nvme3n1 : 1.02 12443.55 48.61 0.00 0.00 10165.93 4940.41 18551.73 00:13:39.190 [2024-11-19T06:37:31.119Z] =================================================================================================================== 00:13:39.190 [2024-11-19T06:37:31.119Z] Total : 75817.87 296.16 0.00 0.00 10053.18 3831.34 18652.55 00:13:40.196 00:13:40.196 real 0m2.604s 00:13:40.196 user 0m1.909s 00:13:40.196 sys 0m0.491s 00:13:40.196 06:37:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.196 06:37:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:40.196 ************************************ 00:13:40.196 END TEST bdev_write_zeroes 00:13:40.196 ************************************ 00:13:40.196 06:37:31 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:40.196 06:37:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:40.196 06:37:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.196 06:37:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.196 ************************************ 00:13:40.196 START TEST bdev_json_nonenclosed 00:13:40.196 ************************************ 00:13:40.196 06:37:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:40.196 [2024-11-19 06:37:31.918917] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:40.196 [2024-11-19 06:37:31.919069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70478 ] 00:13:40.196 [2024-11-19 06:37:32.082891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.458 [2024-11-19 06:37:32.206148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.458 [2024-11-19 06:37:32.206253] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:40.458 [2024-11-19 06:37:32.206273] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:40.458 [2024-11-19 06:37:32.206283] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:40.719 00:13:40.719 real 0m0.554s 00:13:40.719 user 0m0.336s 00:13:40.719 sys 0m0.112s 00:13:40.719 06:37:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.719 ************************************ 00:13:40.719 END TEST bdev_json_nonenclosed 00:13:40.719 ************************************ 00:13:40.719 06:37:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:40.719 06:37:32 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:40.719 06:37:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:40.719 06:37:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.719 06:37:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.719 ************************************ 00:13:40.719 START TEST bdev_json_nonarray 00:13:40.719 ************************************ 00:13:40.719 06:37:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:40.719 [2024-11-19 06:37:32.534614] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:40.719 [2024-11-19 06:37:32.534756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:13:40.980 [2024-11-19 06:37:32.698841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.980 [2024-11-19 06:37:32.817815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.980 [2024-11-19 06:37:32.817948] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
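The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) exist to confirm that bdevperf rejects malformed configs with exactly the errors logged here. The snippets below are illustrative stand-ins for such configs, assuming nothing about the repository's actual nonenclosed.json/nonarray.json fixtures:
# Top-level value is an array, not an object:
# would fail with "Invalid JSON configuration: not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF

# "subsystems" is an object, not an array:
# would fail with "Invalid JSON configuration: 'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "bdev": {} } }
EOF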
00:13:40.980 [2024-11-19 06:37:32.817970] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:40.980 [2024-11-19 06:37:32.817980] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.241 00:13:41.241 real 0m0.546s 00:13:41.241 user 0m0.329s 00:13:41.241 sys 0m0.112s 00:13:41.241 06:37:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.241 06:37:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:41.241 ************************************ 00:13:41.241 END TEST bdev_json_nonarray 00:13:41.241 ************************************ 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:41.241 06:37:33 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:41.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:59.937 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.060 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.060 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.060 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.060 00:14:08.060 real 1m18.412s 00:14:08.060 user 1m25.342s 00:14:08.060 sys 1m20.807s 00:14:08.060 06:37:59 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.060 ************************************ 00:14:08.060 END TEST blockdev_xnvme 00:14:08.060 ************************************ 00:14:08.060 06:37:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.060 06:37:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:08.060 06:37:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.060 06:37:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.060 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:14:08.060 ************************************ 00:14:08.060 START TEST ublk 00:14:08.060 ************************************ 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:08.060 * Looking for test storage... 
00:14:08.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.060 06:37:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.060 06:37:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.060 06:37:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.060 06:37:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.060 06:37:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.060 06:37:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:08.060 06:37:59 ublk -- scripts/common.sh@345 -- # : 1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.060 06:37:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.060 06:37:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@353 -- # local d=1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.060 06:37:59 ublk -- scripts/common.sh@355 -- # echo 1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.060 06:37:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@353 -- # local d=2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.060 06:37:59 ublk -- scripts/common.sh@355 -- # echo 2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.060 06:37:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.060 06:37:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.060 06:37:59 ublk -- scripts/common.sh@368 -- # return 0 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:08.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.060 --rc genhtml_branch_coverage=1 00:14:08.060 --rc genhtml_function_coverage=1 00:14:08.060 --rc genhtml_legend=1 00:14:08.060 --rc geninfo_all_blocks=1 00:14:08.060 --rc geninfo_unexecuted_blocks=1 00:14:08.060 00:14:08.060 ' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:08.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.060 --rc genhtml_branch_coverage=1 00:14:08.060 --rc genhtml_function_coverage=1 00:14:08.060 --rc genhtml_legend=1 00:14:08.060 --rc geninfo_all_blocks=1 00:14:08.060 --rc geninfo_unexecuted_blocks=1 00:14:08.060 00:14:08.060 ' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:08.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.060 --rc genhtml_branch_coverage=1 00:14:08.060 --rc 
genhtml_function_coverage=1 00:14:08.060 --rc genhtml_legend=1 00:14:08.060 --rc geninfo_all_blocks=1 00:14:08.060 --rc geninfo_unexecuted_blocks=1 00:14:08.060 00:14:08.060 ' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:08.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.060 --rc genhtml_branch_coverage=1 00:14:08.060 --rc genhtml_function_coverage=1 00:14:08.060 --rc genhtml_legend=1 00:14:08.060 --rc geninfo_all_blocks=1 00:14:08.060 --rc geninfo_unexecuted_blocks=1 00:14:08.060 00:14:08.060 ' 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:08.060 06:37:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:08.060 06:37:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:08.060 06:37:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:08.060 06:37:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:08.060 06:37:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:08.060 06:37:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:08.060 06:37:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:08.060 06:37:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:08.060 06:37:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.060 06:37:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.060 ************************************ 00:14:08.060 START TEST test_save_ublk_config 00:14:08.060 ************************************ 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:08.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.060 06:37:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70801 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70801 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70801 ']' 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.060 06:37:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.060 [2024-11-19 06:37:59.571313] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:08.060 [2024-11-19 06:37:59.571423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ] 00:14:08.060 [2024-11-19 06:37:59.730371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.060 [2024-11-19 06:37:59.851850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.628 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.888 [2024-11-19 06:38:00.559950] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.888 [2024-11-19 06:38:00.560855] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.888 malloc0 00:14:08.888 [2024-11-19 06:38:00.632091] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:08.888 [2024-11-19 06:38:00.632184] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:08.888 [2024-11-19 06:38:00.632195] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:08.888 [2024-11-19 06:38:00.632203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.888 [2024-11-19 06:38:00.641063] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.888 [2024-11-19 06:38:00.641097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.888 [2024-11-19 06:38:00.647962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.888 [2024-11-19 06:38:00.648089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:14:08.888 [2024-11-19 06:38:00.664952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.888 0 00:14:08.888 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.888 06:38:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:08.888 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.888 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:09.151 "subsystems": [ 00:14:09.151 { 00:14:09.151 "subsystem": "fsdev", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "fsdev_set_opts", 00:14:09.151 "params": { 00:14:09.151 "fsdev_io_pool_size": 65535, 00:14:09.151 "fsdev_io_cache_size": 256 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "keyring", 00:14:09.151 "config": [] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "iobuf", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "iobuf_set_options", 00:14:09.151 "params": { 00:14:09.151 "small_pool_count": 8192, 00:14:09.151 "large_pool_count": 1024, 00:14:09.151 "small_bufsize": 8192, 00:14:09.151 "large_bufsize": 135168, 00:14:09.151 "enable_numa": false 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "sock", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "sock_set_default_impl", 00:14:09.151 "params": { 00:14:09.151 "impl_name": "posix" 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "sock_impl_set_options", 00:14:09.151 "params": { 00:14:09.151 "impl_name": "ssl", 00:14:09.151 "recv_buf_size": 4096, 00:14:09.151 "send_buf_size": 4096, 00:14:09.151 "enable_recv_pipe": true, 00:14:09.151 "enable_quickack": false, 00:14:09.151 "enable_placement_id": 0, 00:14:09.151 "enable_zerocopy_send_server": true, 00:14:09.151 "enable_zerocopy_send_client": false, 00:14:09.151 "zerocopy_threshold": 0, 00:14:09.151 "tls_version": 0, 00:14:09.151 "enable_ktls": false 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "sock_impl_set_options", 00:14:09.151 "params": { 00:14:09.151 "impl_name": "posix", 00:14:09.151 "recv_buf_size": 2097152, 00:14:09.151 "send_buf_size": 2097152, 00:14:09.151 "enable_recv_pipe": true, 00:14:09.151 "enable_quickack": false, 00:14:09.151 "enable_placement_id": 0, 00:14:09.151 "enable_zerocopy_send_server": true, 00:14:09.151 "enable_zerocopy_send_client": false, 00:14:09.151 "zerocopy_threshold": 0, 00:14:09.151 "tls_version": 0, 00:14:09.151 "enable_ktls": false 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "vmd", 00:14:09.151 "config": [] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "accel", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "accel_set_options", 00:14:09.151 "params": { 00:14:09.151 "small_cache_size": 128, 00:14:09.151 "large_cache_size": 16, 00:14:09.151 "task_count": 2048, 00:14:09.151 "sequence_count": 2048, 00:14:09.151 "buf_count": 2048 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "bdev", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "bdev_set_options", 00:14:09.151 
"params": { 00:14:09.151 "bdev_io_pool_size": 65535, 00:14:09.151 "bdev_io_cache_size": 256, 00:14:09.151 "bdev_auto_examine": true, 00:14:09.151 "iobuf_small_cache_size": 128, 00:14:09.151 "iobuf_large_cache_size": 16 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_raid_set_options", 00:14:09.151 "params": { 00:14:09.151 "process_window_size_kb": 1024, 00:14:09.151 "process_max_bandwidth_mb_sec": 0 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_iscsi_set_options", 00:14:09.151 "params": { 00:14:09.151 "timeout_sec": 30 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_nvme_set_options", 00:14:09.151 "params": { 00:14:09.151 "action_on_timeout": "none", 00:14:09.151 "timeout_us": 0, 00:14:09.151 "timeout_admin_us": 0, 00:14:09.151 "keep_alive_timeout_ms": 10000, 00:14:09.151 "arbitration_burst": 0, 00:14:09.151 "low_priority_weight": 0, 00:14:09.151 "medium_priority_weight": 0, 00:14:09.151 "high_priority_weight": 0, 00:14:09.151 "nvme_adminq_poll_period_us": 10000, 00:14:09.151 "nvme_ioq_poll_period_us": 0, 00:14:09.151 "io_queue_requests": 0, 00:14:09.151 "delay_cmd_submit": true, 00:14:09.151 "transport_retry_count": 4, 00:14:09.151 "bdev_retry_count": 3, 00:14:09.151 "transport_ack_timeout": 0, 00:14:09.151 "ctrlr_loss_timeout_sec": 0, 00:14:09.151 "reconnect_delay_sec": 0, 00:14:09.151 "fast_io_fail_timeout_sec": 0, 00:14:09.151 "disable_auto_failback": false, 00:14:09.151 "generate_uuids": false, 00:14:09.151 "transport_tos": 0, 00:14:09.151 "nvme_error_stat": false, 00:14:09.151 "rdma_srq_size": 0, 00:14:09.151 "io_path_stat": false, 00:14:09.151 "allow_accel_sequence": false, 00:14:09.151 "rdma_max_cq_size": 0, 00:14:09.151 "rdma_cm_event_timeout_ms": 0, 00:14:09.151 "dhchap_digests": [ 00:14:09.151 "sha256", 00:14:09.151 "sha384", 00:14:09.151 "sha512" 00:14:09.151 ], 00:14:09.151 "dhchap_dhgroups": [ 00:14:09.151 "null", 00:14:09.151 "ffdhe2048", 00:14:09.151 "ffdhe3072", 00:14:09.151 "ffdhe4096", 00:14:09.151 "ffdhe6144", 00:14:09.151 "ffdhe8192" 00:14:09.151 ] 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_nvme_set_hotplug", 00:14:09.151 "params": { 00:14:09.151 "period_us": 100000, 00:14:09.151 "enable": false 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_malloc_create", 00:14:09.151 "params": { 00:14:09.151 "name": "malloc0", 00:14:09.151 "num_blocks": 8192, 00:14:09.151 "block_size": 4096, 00:14:09.151 "physical_block_size": 4096, 00:14:09.151 "uuid": "31becea2-65c1-4875-8218-d987e2f21bae", 00:14:09.151 "optimal_io_boundary": 0, 00:14:09.151 "md_size": 0, 00:14:09.151 "dif_type": 0, 00:14:09.151 "dif_is_head_of_md": false, 00:14:09.151 "dif_pi_format": 0 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_wait_for_examine" 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "scsi", 00:14:09.151 "config": null 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "scheduler", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "framework_set_scheduler", 00:14:09.151 "params": { 00:14:09.151 "name": "static" 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "vhost_scsi", 00:14:09.151 "config": [] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "vhost_blk", 00:14:09.151 "config": [] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "ublk", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": 
"ublk_create_target", 00:14:09.151 "params": { 00:14:09.151 "cpumask": "1" 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "ublk_start_disk", 00:14:09.151 "params": { 00:14:09.151 "bdev_name": "malloc0", 00:14:09.151 "ublk_id": 0, 00:14:09.151 "num_queues": 1, 00:14:09.151 "queue_depth": 128 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "nbd", 00:14:09.151 "config": [] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "nvmf", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "nvmf_set_config", 00:14:09.151 "params": { 00:14:09.151 "discovery_filter": "match_any", 00:14:09.151 "admin_cmd_passthru": { 00:14:09.151 "identify_ctrlr": false 00:14:09.151 }, 00:14:09.151 "dhchap_digests": [ 00:14:09.151 "sha256", 00:14:09.151 "sha384", 00:14:09.151 "sha512" 00:14:09.151 ], 00:14:09.151 "dhchap_dhgroups": [ 00:14:09.151 "null", 00:14:09.151 "ffdhe2048", 00:14:09.151 "ffdhe3072", 00:14:09.151 "ffdhe4096", 00:14:09.151 "ffdhe6144", 00:14:09.151 "ffdhe8192" 00:14:09.151 ] 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "nvmf_set_max_subsystems", 00:14:09.151 "params": { 00:14:09.151 "max_subsystems": 1024 00:14:09.151 } 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "nvmf_set_crdt", 00:14:09.151 "params": { 00:14:09.151 "crdt1": 0, 00:14:09.151 "crdt2": 0, 00:14:09.151 "crdt3": 0 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "subsystem": "iscsi", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "method": "iscsi_set_options", 00:14:09.151 "params": { 00:14:09.151 "node_base": "iqn.2016-06.io.spdk", 00:14:09.151 "max_sessions": 128, 00:14:09.151 "max_connections_per_session": 2, 00:14:09.151 "max_queue_depth": 64, 00:14:09.151 "default_time2wait": 2, 00:14:09.151 "default_time2retain": 20, 00:14:09.151 "first_burst_length": 8192, 00:14:09.151 "immediate_data": true, 00:14:09.151 "allow_duplicated_isid": false, 00:14:09.151 "error_recovery_level": 0, 00:14:09.151 "nop_timeout": 60, 00:14:09.151 "nop_in_interval": 30, 00:14:09.151 "disable_chap": false, 00:14:09.151 "require_chap": false, 00:14:09.151 "mutual_chap": false, 00:14:09.151 "chap_group": 0, 00:14:09.151 "max_large_datain_per_connection": 64, 00:14:09.151 "max_r2t_per_connection": 4, 00:14:09.151 "pdu_pool_size": 36864, 00:14:09.151 "immediate_data_pool_size": 16384, 00:14:09.151 "data_out_pool_size": 2048 00:14:09.151 } 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 }' 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70801 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70801 ']' 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70801 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70801 00:14:09.151 killing process with pid 70801 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 70801' 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70801 00:14:09.151 06:38:00 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70801 00:14:10.537 [2024-11-19 06:38:02.096565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:10.537 [2024-11-19 06:38:02.124074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:10.537 [2024-11-19 06:38:02.124220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:10.537 [2024-11-19 06:38:02.132973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:10.537 [2024-11-19 06:38:02.133041] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:10.537 [2024-11-19 06:38:02.133065] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:10.537 [2024-11-19 06:38:02.133090] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:10.537 [2024-11-19 06:38:02.133249] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70856 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70856 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70856 ']' 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:11.921 06:38:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:11.921 "subsystems": [ 00:14:11.921 { 00:14:11.921 "subsystem": "fsdev", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "fsdev_set_opts", 00:14:11.921 "params": { 00:14:11.921 "fsdev_io_pool_size": 65535, 00:14:11.921 "fsdev_io_cache_size": 256 00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "keyring", 00:14:11.921 "config": [] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "iobuf", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "iobuf_set_options", 00:14:11.921 "params": { 00:14:11.921 "small_pool_count": 8192, 00:14:11.921 "large_pool_count": 1024, 00:14:11.921 "small_bufsize": 8192, 00:14:11.921 "large_bufsize": 135168, 00:14:11.921 "enable_numa": false 00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "sock", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "sock_set_default_impl", 00:14:11.921 "params": { 00:14:11.921 "impl_name": "posix" 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "sock_impl_set_options", 00:14:11.921 "params": { 00:14:11.921 "impl_name": "ssl", 00:14:11.921 "recv_buf_size": 4096, 00:14:11.921 "send_buf_size": 4096, 00:14:11.921 "enable_recv_pipe": true, 00:14:11.921 "enable_quickack": false, 00:14:11.921 "enable_placement_id": 0, 00:14:11.921 "enable_zerocopy_send_server": true, 00:14:11.921 "enable_zerocopy_send_client": false, 00:14:11.921 "zerocopy_threshold": 0, 00:14:11.921 "tls_version": 0, 00:14:11.921 "enable_ktls": false 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "sock_impl_set_options", 00:14:11.921 "params": { 00:14:11.921 "impl_name": "posix", 00:14:11.921 "recv_buf_size": 2097152, 00:14:11.921 "send_buf_size": 2097152, 00:14:11.921 "enable_recv_pipe": true, 00:14:11.921 "enable_quickack": false, 00:14:11.921 "enable_placement_id": 0, 00:14:11.921 "enable_zerocopy_send_server": true, 00:14:11.921 "enable_zerocopy_send_client": false, 00:14:11.921 "zerocopy_threshold": 0, 00:14:11.921 "tls_version": 0, 00:14:11.921 "enable_ktls": false 00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "vmd", 00:14:11.921 "config": [] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "accel", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "accel_set_options", 00:14:11.921 "params": { 00:14:11.921 "small_cache_size": 128, 00:14:11.921 "large_cache_size": 16, 00:14:11.921 "task_count": 2048, 00:14:11.921 "sequence_count": 2048, 00:14:11.921 "buf_count": 2048 00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "bdev", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "bdev_set_options", 00:14:11.921 "params": { 00:14:11.921 "bdev_io_pool_size": 65535, 00:14:11.921 "bdev_io_cache_size": 256, 00:14:11.921 "bdev_auto_examine": true, 00:14:11.921 "iobuf_small_cache_size": 128, 00:14:11.921 "iobuf_large_cache_size": 16 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_raid_set_options", 
00:14:11.921 "params": { 00:14:11.921 "process_window_size_kb": 1024, 00:14:11.921 "process_max_bandwidth_mb_sec": 0 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_iscsi_set_options", 00:14:11.921 "params": { 00:14:11.921 "timeout_sec": 30 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_nvme_set_options", 00:14:11.921 "params": { 00:14:11.921 "action_on_timeout": "none", 00:14:11.921 "timeout_us": 0, 00:14:11.921 "timeout_admin_us": 0, 00:14:11.921 "keep_alive_timeout_ms": 10000, 00:14:11.921 "arbitration_burst": 0, 00:14:11.921 "low_priority_weight": 0, 00:14:11.921 "medium_priority_weight": 0, 00:14:11.921 "high_priority_weight": 0, 00:14:11.921 "nvme_adminq_poll_period_us": 10000, 00:14:11.921 "nvme_ioq_poll_period_us": 0, 00:14:11.921 "io_queue_requests": 0, 00:14:11.921 "delay_cmd_submit": true, 00:14:11.921 "transport_retry_count": 4, 00:14:11.921 "bdev_retry_count": 3, 00:14:11.921 "transport_ack_timeout": 0, 00:14:11.921 "ctrlr_loss_timeout_sec": 0, 00:14:11.921 "reconnect_delay_sec": 0, 00:14:11.921 "fast_io_fail_timeout_sec": 0, 00:14:11.921 "disable_auto_failback": false, 00:14:11.921 "generate_uuids": false, 00:14:11.921 "transport_tos": 0, 00:14:11.921 "nvme_error_stat": false, 00:14:11.921 "rdma_srq_size": 0, 00:14:11.921 "io_path_stat": false, 00:14:11.921 "allow_accel_sequence": false, 00:14:11.921 "rdma_max_cq_size": 0, 00:14:11.921 "rdma_cm_event_timeout_ms": 0, 00:14:11.921 "dhchap_digests": [ 00:14:11.921 "sha256", 00:14:11.921 "sha384", 00:14:11.921 "sha512" 00:14:11.921 ], 00:14:11.921 "dhchap_dhgroups": [ 00:14:11.921 "null", 00:14:11.921 "ffdhe2048", 00:14:11.921 "ffdhe3072", 00:14:11.921 "ffdhe4096", 00:14:11.921 "ffdhe6144", 00:14:11.921 "ffdhe8192" 00:14:11.921 ] 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_nvme_set_hotplug", 00:14:11.921 "params": { 00:14:11.921 "period_us": 100000, 00:14:11.921 "enable": false 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_malloc_create", 00:14:11.921 "params": { 00:14:11.921 "name": "malloc0", 00:14:11.921 "num_blocks": 8192, 00:14:11.921 "block_size": 4096, 00:14:11.921 "physical_block_size": 4096, 00:14:11.921 "uuid": "31becea2-65c1-4875-8218-d987e2f21bae", 00:14:11.921 "optimal_io_boundary": 0, 00:14:11.921 "md_size": 0, 00:14:11.921 "dif_type": 0, 00:14:11.921 "dif_is_head_of_md": false, 00:14:11.921 "dif_pi_format": 0 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "bdev_wait_for_examine" 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "scsi", 00:14:11.921 "config": null 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "scheduler", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "framework_set_scheduler", 00:14:11.921 "params": { 00:14:11.921 "name": "static" 00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "vhost_scsi", 00:14:11.921 "config": [] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "vhost_blk", 00:14:11.921 "config": [] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "ublk", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "ublk_create_target", 00:14:11.921 "params": { 00:14:11.921 "cpumask": "1" 00:14:11.921 } 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "method": "ublk_start_disk", 00:14:11.921 "params": { 00:14:11.921 "bdev_name": "malloc0", 00:14:11.921 "ublk_id": 0, 00:14:11.921 "num_queues": 1, 00:14:11.921 "queue_depth": 128 
00:14:11.921 } 00:14:11.921 } 00:14:11.921 ] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "nbd", 00:14:11.921 "config": [] 00:14:11.921 }, 00:14:11.921 { 00:14:11.921 "subsystem": "nvmf", 00:14:11.921 "config": [ 00:14:11.921 { 00:14:11.921 "method": "nvmf_set_config", 00:14:11.922 "params": { 00:14:11.922 "discovery_filter": "match_any", 00:14:11.922 "admin_cmd_passthru": { 00:14:11.922 "identify_ctrlr": false 00:14:11.922 }, 00:14:11.922 "dhchap_digests": [ 00:14:11.922 "sha256", 00:14:11.922 "sha384", 00:14:11.922 "sha512" 00:14:11.922 ], 00:14:11.922 "dhchap_dhgroups": [ 00:14:11.922 "null", 00:14:11.922 "ffdhe2048", 00:14:11.922 "ffdhe3072", 00:14:11.922 "ffdhe4096", 00:14:11.922 "ffdhe6144", 00:14:11.922 "ffdhe8192" 00:14:11.922 ] 00:14:11.922 } 00:14:11.922 }, 00:14:11.922 { 00:14:11.922 "method": "nvmf_set_max_subsystems", 00:14:11.922 "params": { 00:14:11.922 "max_subsystems": 1024 00:14:11.922 } 00:14:11.922 }, 00:14:11.922 { 00:14:11.922 "method": "nvmf_set_crdt", 00:14:11.922 "params": { 00:14:11.922 "crdt1": 0, 00:14:11.922 "crdt2": 0, 00:14:11.922 "crdt3": 0 00:14:11.922 } 00:14:11.922 } 00:14:11.922 ] 00:14:11.922 }, 00:14:11.922 { 00:14:11.922 "subsystem": "iscsi", 00:14:11.922 "config": [ 00:14:11.922 { 00:14:11.922 "method": "iscsi_set_options", 00:14:11.922 "params": { 00:14:11.922 "node_base": "iqn.2016-06.io.spdk", 00:14:11.922 "max_sessions": 128, 00:14:11.922 "max_connections_per_session": 2, 00:14:11.922 "max_queue_depth": 64, 00:14:11.922 "default_time2wait": 2, 00:14:11.922 "default_time2retain": 20, 00:14:11.922 "first_burst_length": 8192, 00:14:11.922 "immediate_data": true, 00:14:11.922 "allow_duplicated_isid": false, 00:14:11.922 "error_recovery_level": 0, 00:14:11.922 "nop_timeout": 60, 00:14:11.922 "nop_in_interval": 30, 00:14:11.922 "disable_chap": false, 00:14:11.922 "require_chap": false, 00:14:11.922 "mutual_chap": false, 00:14:11.922 "chap_group": 0, 00:14:11.922 "max_large_datain_per_connection": 64, 00:14:11.922 "max_r2t_per_connection": 4, 00:14:11.922 "pdu_pool_size": 36864, 00:14:11.922 "immediate_data_pool_size": 16384, 00:14:11.922 "data_out_pool_size": 2048 00:14:11.922 } 00:14:11.922 } 00:14:11.922 ] 00:14:11.922 } 00:14:11.922 ] 00:14:11.922 }' 00:14:11.922 [2024-11-19 06:38:03.482147] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
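Note: the JSON block echoed just above is the save_config output from the first target being piped straight back into a fresh spdk_tgt through -c /dev/fd/63; the test then passes once ublk_get_disks reports /dev/ublkb0 again from the replayed ublk_create_target and ublk_start_disk entries. Done by hand, the round trip amounts to the sketch below; the temp-file path is an assumption and rpc.py stands in for the test's rpc_cmd wrapper, while the spdk_tgt flags match the logged command.

# Sketch: persist the live ublk setup, then replay it into a new target.
./scripts/rpc.py save_config > /tmp/ublk_config.json      # same JSON as echoed above
./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &   # the logged run feeds the config via -c /dev/fd/63
# After startup, /dev/ublkb0 should reappear; the test checks it with:
./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expected: /dev/ublkb0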
00:14:11.922 [2024-11-19 06:38:03.482230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70856 ] 00:14:11.922 [2024-11-19 06:38:03.632468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.922 [2024-11-19 06:38:03.744599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.867 [2024-11-19 06:38:04.603944] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:12.867 [2024-11-19 06:38:04.604796] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:12.867 [2024-11-19 06:38:04.612071] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:12.867 [2024-11-19 06:38:04.612151] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:12.867 [2024-11-19 06:38:04.612161] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:12.867 [2024-11-19 06:38:04.612168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:12.867 [2024-11-19 06:38:04.621026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:12.867 [2024-11-19 06:38:04.621051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:12.867 [2024-11-19 06:38:04.627953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:12.867 [2024-11-19 06:38:04.628050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:12.867 [2024-11-19 06:38:04.644945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70856 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70856 ']' 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70856 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70856 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.867 killing process with pid 70856 00:14:12.867 
06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70856' 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70856 00:14:12.867 06:38:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70856 00:14:14.252 [2024-11-19 06:38:06.006440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.252 [2024-11-19 06:38:06.035997] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.252 [2024-11-19 06:38:06.036098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.252 [2024-11-19 06:38:06.044952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.252 [2024-11-19 06:38:06.044991] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.252 [2024-11-19 06:38:06.044997] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.252 [2024-11-19 06:38:06.045017] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.252 [2024-11-19 06:38:06.045125] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.630 06:38:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:15.630 ************************************ 00:14:15.630 END TEST test_save_ublk_config 00:14:15.630 ************************************ 00:14:15.630 00:14:15.630 real 0m7.712s 00:14:15.630 user 0m5.271s 00:14:15.630 sys 0m3.062s 00:14:15.630 06:38:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.630 06:38:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.630 06:38:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70928 00:14:15.630 06:38:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.630 06:38:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70928 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.630 06:38:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.630 06:38:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:15.630 [2024-11-19 06:38:07.333976] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:14:15.630 [2024-11-19 06:38:07.334523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:14:15.630 [2024-11-19 06:38:07.494097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.890 [2024-11-19 06:38:07.617893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.890 [2024-11-19 06:38:07.617902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.462 06:38:08 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.462 06:38:08 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:16.462 06:38:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:16.462 06:38:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.462 06:38:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.462 06:38:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.462 ************************************ 00:14:16.462 START TEST test_create_ublk 00:14:16.462 ************************************ 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:16.462 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.462 [2024-11-19 06:38:08.337957] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.462 [2024-11-19 06:38:08.340498] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.462 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:16.462 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.462 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.721 [2024-11-19 06:38:08.562080] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:16.721 [2024-11-19 06:38:08.562445] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:16.721 [2024-11-19 06:38:08.562462] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.721 [2024-11-19 06:38:08.562470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.721 [2024-11-19 06:38:08.571122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.721 [2024-11-19 06:38:08.571145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.721 
[2024-11-19 06:38:08.577950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.721 [2024-11-19 06:38:08.589010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.721 [2024-11-19 06:38:08.609966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.721 06:38:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:16.721 { 00:14:16.721 "ublk_device": "/dev/ublkb0", 00:14:16.721 "id": 0, 00:14:16.721 "queue_depth": 512, 00:14:16.721 "num_queues": 4, 00:14:16.721 "bdev_name": "Malloc0" 00:14:16.721 } 00:14:16.721 ]' 00:14:16.721 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:14:16.979 06:38:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:17.237 fio: verification read phase will never start because write phase uses all of runtime 00:14:17.237 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:17.237 fio-3.35 00:14:17.237 Starting 1 process 00:14:27.200 00:14:27.200 fio_test: (groupid=0, jobs=1): err= 0: pid=70977: Tue Nov 19 06:38:19 2024 00:14:27.200 write: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(797MiB/10001msec); 0 zone resets 00:14:27.200 clat (usec): min=32, max=3963, avg=48.20, stdev=81.36 00:14:27.200 lat (usec): min=32, max=3972, avg=48.65, stdev=81.37 00:14:27.200 clat percentiles (usec): 00:14:27.200 | 1.00th=[ 38], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:14:27.200 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:14:27.200 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 51], 95.00th=[ 55], 00:14:27.200 | 99.00th=[ 65], 99.50th=[ 69], 99.90th=[ 1172], 99.95th=[ 2507], 00:14:27.200 | 99.99th=[ 3490] 00:14:27.200 bw ( KiB/s): min=77224, max=87592, per=100.00%, avg=81750.37, stdev=2273.38, samples=19 00:14:27.200 iops : min=19306, max=21898, avg=20437.58, stdev=568.34, samples=19 00:14:27.200 lat (usec) : 50=88.78%, 100=11.00%, 250=0.09%, 500=0.02%, 750=0.01% 00:14:27.200 lat (usec) : 1000=0.01% 00:14:27.200 lat (msec) : 2=0.04%, 4=0.07% 00:14:27.200 cpu : usr=3.05%, sys=15.85%, ctx=204154, majf=0, minf=796 00:14:27.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.200 issued rwts: total=0,204147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.200 00:14:27.200 Run status group 0 (all jobs): 00:14:27.200 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=797MiB (836MB), run=10001-10001msec 00:14:27.200 00:14:27.200 Disk stats (read/write): 00:14:27.200 ublkb0: ios=0/202050, merge=0/0, ticks=0/8139, in_queue=8140, util=99.09% 00:14:27.200 06:38:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.200 [2024-11-19 06:38:19.042469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:27.200 [2024-11-19 06:38:19.070394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:27.200 [2024-11-19 06:38:19.071290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:27.200 [2024-11-19 06:38:19.077951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:27.200 [2024-11-19 06:38:19.078178] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:27.200 [2024-11-19 06:38:19.078187] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.200 06:38:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.200 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.200 [2024-11-19 06:38:19.094012] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:27.200 request: 00:14:27.200 { 00:14:27.200 "ublk_id": 0, 00:14:27.200 "method": "ublk_stop_disk", 00:14:27.200 "req_id": 1 00:14:27.200 } 00:14:27.200 Got JSON-RPC error response 00:14:27.200 response: 00:14:27.200 { 00:14:27.201 "code": -19, 00:14:27.201 "message": "No such device" 00:14:27.201 } 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.201 06:38:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.201 [2024-11-19 06:38:19.110008] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:27.201 [2024-11-19 06:38:19.113583] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:27.201 [2024-11-19 06:38:19.113618] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.201 06:38:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.201 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.792 06:38:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:27.792 06:38:19 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:27.792 ************************************ 00:14:27.792 END TEST test_create_ublk 00:14:27.792 ************************************ 00:14:27.792 06:38:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:27.792 00:14:27.792 real 0m11.229s 00:14:27.792 user 0m0.625s 00:14:27.792 sys 0m1.653s 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.792 06:38:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 06:38:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:27.792 06:38:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.792 06:38:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.792 06:38:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 ************************************ 00:14:27.792 START TEST test_create_multi_ublk 00:14:27.792 ************************************ 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 [2024-11-19 06:38:19.607933] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:27.792 [2024-11-19 06:38:19.609490] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.792 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.049 [2024-11-19 06:38:19.824050] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:28.049 [2024-11-19 06:38:19.824340] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:28.049 [2024-11-19 06:38:19.824353] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:28.049 [2024-11-19 06:38:19.824362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.049 [2024-11-19 06:38:19.835988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.049 [2024-11-19 06:38:19.836009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.049 [2024-11-19 06:38:19.847944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.049 [2024-11-19 06:38:19.848435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:28.049 [2024-11-19 06:38:19.855967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.049 06:38:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.306 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.307 [2024-11-19 06:38:20.095045] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:28.307 [2024-11-19 06:38:20.095338] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:28.307 [2024-11-19 06:38:20.095350] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:28.307 [2024-11-19 06:38:20.095356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.307 [2024-11-19 06:38:20.099183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.307 [2024-11-19 06:38:20.099200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.307 [2024-11-19 06:38:20.109952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.307 [2024-11-19 06:38:20.110442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:28.307 [2024-11-19 06:38:20.126957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.307 06:38:20 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.307 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.564 [2024-11-19 06:38:20.286026] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:28.564 [2024-11-19 06:38:20.286340] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:28.564 [2024-11-19 06:38:20.286355] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:28.564 [2024-11-19 06:38:20.286365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.564 [2024-11-19 06:38:20.293964] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.564 [2024-11-19 06:38:20.293986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.564 [2024-11-19 06:38:20.301946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.564 [2024-11-19 06:38:20.302467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:28.564 [2024-11-19 06:38:20.310967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.564 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.564 [2024-11-19 06:38:20.470051] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:28.564 [2024-11-19 06:38:20.470362] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:28.565 [2024-11-19 06:38:20.470379] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:28.565 [2024-11-19 06:38:20.470387] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.565 [2024-11-19 
06:38:20.477963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.565 [2024-11-19 06:38:20.477983] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.565 [2024-11-19 06:38:20.485948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.565 [2024-11-19 06:38:20.486454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:28.565 [2024-11-19 06:38:20.494973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:28.822 { 00:14:28.822 "ublk_device": "/dev/ublkb0", 00:14:28.822 "id": 0, 00:14:28.822 "queue_depth": 512, 00:14:28.822 "num_queues": 4, 00:14:28.822 "bdev_name": "Malloc0" 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "ublk_device": "/dev/ublkb1", 00:14:28.822 "id": 1, 00:14:28.822 "queue_depth": 512, 00:14:28.822 "num_queues": 4, 00:14:28.822 "bdev_name": "Malloc1" 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "ublk_device": "/dev/ublkb2", 00:14:28.822 "id": 2, 00:14:28.822 "queue_depth": 512, 00:14:28.822 "num_queues": 4, 00:14:28.822 "bdev_name": "Malloc2" 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "ublk_device": "/dev/ublkb3", 00:14:28.822 "id": 3, 00:14:28.822 "queue_depth": 512, 00:14:28.822 "num_queues": 4, 00:14:28.822 "bdev_name": "Malloc3" 00:14:28.822 } 00:14:28.822 ]' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
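The trace above boils down to a small, repeatable RPC sequence per device. A minimal sketch of that sequence, assuming an spdk_tgt built with ublk support is already running and that scripts/rpc.py talks to its default /var/tmp/spdk.sock (both assumptions about the environment, not shown in this excerpt):

  # create the ublk target inside the running SPDK app (done once)
  scripts/rpc.py ublk_create_target

  # back the device with a 128 MB malloc bdev using 4096-byte blocks
  scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096

  # expose the bdev to the kernel as /dev/ublkb0 with 4 queues of depth 512
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512

  # repeat for Malloc1..Malloc3 with ids 1..3, then confirm what is exported
  scripts/rpc.py ublk_get_disks

The ublk_get_disks JSON is what the test then checks field by field, as the jq/[[ ... ]] comparisons in the surrounding trace show.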
00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:28.822 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.080 06:38:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.337 [2024-11-19 06:38:21.150023] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.337 [2024-11-19 06:38:21.183307] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.337 [2024-11-19 06:38:21.184435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.337 [2024-11-19 06:38:21.189958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.337 [2024-11-19 06:38:21.190190] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:29.337 [2024-11-19 06:38:21.190204] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.337 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.337 [2024-11-19 06:38:21.205996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.338 [2024-11-19 06:38:21.246383] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.338 [2024-11-19 06:38:21.247366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.338 [2024-11-19 06:38:21.251958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.338 [2024-11-19 06:38:21.252180] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:29.338 [2024-11-19 06:38:21.252192] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:29.338 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.338 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.338 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:29.338 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.338 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.595 [2024-11-19 06:38:21.270016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.596 [2024-11-19 06:38:21.310368] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.596 [2024-11-19 06:38:21.311333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.596 [2024-11-19 06:38:21.317950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.596 [2024-11-19 06:38:21.318172] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:29.596 [2024-11-19 06:38:21.318184] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
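Teardown is the mirror image, and the stop/delete trace around this point follows it device by device. A sketch of the cleanup path, under the same assumptions about the running target and rpc.py location:

  # stop each exported disk; the driver logs STOP_DEV followed by DEL_DEV
  for i in 0 1 2 3; do
      scripts/rpc.py ublk_stop_disk "$i"
  done

  # tear down the ublk target; the test allows a longer RPC timeout here
  scripts/rpc.py -t 120 ublk_destroy_target

  # finally remove the backing bdevs
  for m in Malloc0 Malloc1 Malloc2 Malloc3; do
      scripts/rpc.py bdev_malloc_delete "$m"
  done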
00:14:29.596 [2024-11-19 06:38:21.334009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.596 [2024-11-19 06:38:21.374299] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.596 [2024-11-19 06:38:21.375277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.596 [2024-11-19 06:38:21.389947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.596 [2024-11-19 06:38:21.390156] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:29.596 [2024-11-19 06:38:21.390164] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.596 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:29.853 [2024-11-19 06:38:21.581998] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.853 [2024-11-19 06:38:21.585598] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:29.853 [2024-11-19 06:38:21.585627] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:29.853 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:29.853 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.853 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:29.853 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.853 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.111 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.111 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.111 06:38:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:30.111 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.111 06:38:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.368 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.368 06:38:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.368 06:38:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:30.368 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.368 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.625 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.625 06:38:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.625 06:38:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:30.625 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.625 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:30.882 ************************************ 00:14:30.882 END TEST test_create_multi_ublk 00:14:30.882 ************************************ 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:30.882 00:14:30.882 real 0m3.192s 00:14:30.882 user 0m0.810s 00:14:30.882 sys 0m0.125s 00:14:30.882 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.883 06:38:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.883 06:38:22 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:30.883 06:38:22 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:30.883 06:38:22 ublk -- ublk/ublk.sh@130 -- # killprocess 70928 00:14:30.883 06:38:22 ublk -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:14:30.883 06:38:22 ublk -- common/autotest_common.sh@958 -- # kill -0 70928 00:14:30.883 06:38:22 ublk -- common/autotest_common.sh@959 -- # uname 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:14:31.141 killing process with pid 70928 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928' 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@973 -- # kill 70928 00:14:31.141 06:38:22 ublk -- common/autotest_common.sh@978 -- # wait 70928 00:14:31.706 [2024-11-19 06:38:23.368190] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.706 [2024-11-19 06:38:23.368236] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:32.272 00:14:32.272 real 0m24.686s 00:14:32.272 user 0m35.105s 00:14:32.272 sys 0m9.825s 00:14:32.272 06:38:24 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.272 ************************************ 00:14:32.272 END TEST ublk 00:14:32.272 ************************************ 00:14:32.272 06:38:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.272 06:38:24 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.272 06:38:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:14:32.272 06:38:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.272 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:14:32.272 ************************************ 00:14:32.272 START TEST ublk_recovery 00:14:32.272 ************************************ 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.272 * Looking for test storage... 00:14:32.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.272 06:38:24 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:32.272 06:38:24 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:32.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.273 --rc genhtml_branch_coverage=1 00:14:32.273 --rc genhtml_function_coverage=1 00:14:32.273 --rc genhtml_legend=1 00:14:32.273 --rc geninfo_all_blocks=1 00:14:32.273 --rc geninfo_unexecuted_blocks=1 00:14:32.273 00:14:32.273 ' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:32.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.273 --rc genhtml_branch_coverage=1 00:14:32.273 --rc genhtml_function_coverage=1 00:14:32.273 --rc genhtml_legend=1 00:14:32.273 --rc geninfo_all_blocks=1 00:14:32.273 --rc geninfo_unexecuted_blocks=1 00:14:32.273 00:14:32.273 ' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:32.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.273 --rc genhtml_branch_coverage=1 00:14:32.273 --rc genhtml_function_coverage=1 00:14:32.273 --rc genhtml_legend=1 00:14:32.273 --rc geninfo_all_blocks=1 00:14:32.273 --rc geninfo_unexecuted_blocks=1 00:14:32.273 00:14:32.273 ' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:32.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.273 --rc genhtml_branch_coverage=1 00:14:32.273 --rc genhtml_function_coverage=1 00:14:32.273 --rc genhtml_legend=1 00:14:32.273 --rc geninfo_all_blocks=1 00:14:32.273 --rc geninfo_unexecuted_blocks=1 00:14:32.273 00:14:32.273 ' 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:32.273 06:38:24 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:32.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71322 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71322 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71322 ']' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.273 06:38:24 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:32.273 06:38:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.531 [2024-11-19 06:38:24.276433] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:32.531 [2024-11-19 06:38:24.276547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71322 ] 00:14:32.531 [2024-11-19 06:38:24.423359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.789 [2024-11-19 06:38:24.500655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.789 [2024-11-19 06:38:24.500744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:33.356 06:38:25 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.356 [2024-11-19 06:38:25.103943] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:33.356 [2024-11-19 06:38:25.105530] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.356 06:38:25 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.356 malloc0 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.356 06:38:25 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.356 [2024-11-19 06:38:25.184392] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:33.356 [2024-11-19 06:38:25.184474] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:33.356 [2024-11-19 06:38:25.184483] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:33.356 [2024-11-19 06:38:25.184490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:33.356 [2024-11-19 06:38:25.193017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:33.356 [2024-11-19 06:38:25.193036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:33.356 [2024-11-19 06:38:25.199949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:33.356 [2024-11-19 06:38:25.200059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:33.356 [2024-11-19 06:38:25.204034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:33.356 1 00:14:33.356 06:38:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.356 06:38:25 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:34.289 06:38:26 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71352 00:14:34.289 06:38:26 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:34.290 06:38:26 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:34.547 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:34.547 fio-3.35 00:14:34.547 Starting 1 process 00:14:39.814 06:38:31 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71322 00:14:39.814 06:38:31 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:45.109 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71322 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:45.109 06:38:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71468 00:14:45.109 06:38:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:45.109 06:38:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.109 06:38:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71468 00:14:45.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71468 ']' 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.109 06:38:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 [2024-11-19 06:38:36.312859] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
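What this part of ublk_recovery.sh is exercising: keep fio running against /dev/ublkb1, hard-kill the SPDK target underneath it, start a fresh target, and ask the new target to re-adopt the still-open kernel device. A condensed sketch of that flow, using the commands visible in the trace (paths and core masks are from this run and are assumptions elsewhere):

  # 60 seconds of random read/write load against the exported device
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &

  # simulate a crash of the user-space target while fio is still running;
  # assumes the original target's pid was captured in $spdk_pid at launch
  kill -9 "$spdk_pid"

  # bring up a replacement target (the real script waits for its RPC socket)
  build/bin/spdk_tgt -m 0x3 -L ublk &

  # recreate the target and backing bdev, then recover the existing ublk device
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1

The GET_DEV_INFO polling and the START_USER_RECOVERY/END_USER_RECOVERY control commands that follow in the trace are the driver-side traffic that last RPC drives.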
00:14:45.109 [2024-11-19 06:38:36.313029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71468 ] 00:14:45.109 [2024-11-19 06:38:36.476626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.109 [2024-11-19 06:38:36.599678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.109 [2024-11-19 06:38:36.599767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.371 06:38:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.371 06:38:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:45.371 06:38:37 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:45.371 06:38:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.371 06:38:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.371 [2024-11-19 06:38:37.300952] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:45.631 [2024-11-19 06:38:37.303211] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.631 06:38:37 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.631 malloc0 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.631 06:38:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.631 [2024-11-19 06:38:37.420108] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:45.631 [2024-11-19 06:38:37.420158] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:45.631 [2024-11-19 06:38:37.420169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:45.631 [2024-11-19 06:38:37.427982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:45.631 [2024-11-19 06:38:37.428017] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:45.631 1 00:14:45.631 06:38:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.631 06:38:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71352 00:14:46.571 [2024-11-19 06:38:38.428054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:46.571 [2024-11-19 06:38:38.435954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:46.571 [2024-11-19 06:38:38.435973] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:47.575 [2024-11-19 06:38:39.435998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:47.575 [2024-11-19 06:38:39.443945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:47.575 [2024-11-19 06:38:39.444027] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:48.521 [2024-11-19 06:38:40.444063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:48.521 [2024-11-19 06:38:40.451945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:48.521 [2024-11-19 06:38:40.452016] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:48.521 [2024-11-19 06:38:40.452040] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:48.521 [2024-11-19 06:38:40.452148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:10.448 [2024-11-19 06:39:01.729975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:10.448 [2024-11-19 06:39:01.733390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:10.448 [2024-11-19 06:39:01.740135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:10.448 [2024-11-19 06:39:01.740151] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:36.990 00:15:36.990 fio_test: (groupid=0, jobs=1): err= 0: pid=71359: Tue Nov 19 06:39:26 2024 00:15:36.990 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(3325MiB/60001msec) 00:15:36.990 slat (nsec): min=1162, max=203097, avg=5191.29, stdev=1567.86 00:15:36.990 clat (usec): min=719, max=30533k, avg=4788.71, stdev=282707.82 00:15:36.990 lat (usec): min=724, max=30533k, avg=4793.91, stdev=282707.81 00:15:36.990 clat percentiles (usec): 00:15:36.990 | 1.00th=[ 1729], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1926], 00:15:36.990 | 30.00th=[ 1975], 40.00th=[ 2008], 50.00th=[ 2073], 60.00th=[ 2114], 00:15:36.990 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3097], 00:15:36.990 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 8291], 00:15:36.990 | 99.99th=[13304] 00:15:36.990 bw ( KiB/s): min=21616, max=130192, per=100.00%, avg=113690.31, stdev=17502.52, samples=59 00:15:36.990 iops : min= 5404, max=32548, avg=28422.58, stdev=4375.63, samples=59 00:15:36.990 write: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(3321MiB/60001msec); 0 zone resets 00:15:36.990 slat (nsec): min=1141, max=1081.4k, avg=5323.12, stdev=1994.65 00:15:36.990 clat (usec): min=684, max=30533k, avg=4227.26, stdev=245550.69 00:15:36.990 lat (usec): min=686, max=30533k, avg=4232.58, stdev=245550.68 00:15:36.990 clat percentiles (usec): 00:15:36.990 | 1.00th=[ 1795], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 2008], 00:15:36.990 | 30.00th=[ 2057], 40.00th=[ 2114], 50.00th=[ 2180], 60.00th=[ 2212], 00:15:36.990 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 2999], 00:15:36.990 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 7635], 99.95th=[ 8291], 00:15:36.990 | 99.99th=[13435] 00:15:36.990 bw ( KiB/s): min=21712, max=130560, per=100.00%, avg=113552.27, stdev=17632.76, samples=59 00:15:36.990 iops : min= 5428, max=32640, avg=28388.07, stdev=4408.19, samples=59 00:15:36.990 lat (usec) : 750=0.01%, 1000=0.01% 00:15:36.990 lat (msec) : 2=28.56%, 4=68.59%, 10=2.82%, 20=0.02%, >=2000=0.01% 00:15:36.990 cpu : usr=3.17%, sys=15.15%, ctx=56197, majf=0, minf=13 00:15:36.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:36.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:36.990 issued rwts: total=851255,850168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:36.990 00:15:36.990 Run status group 0 (all jobs): 00:15:36.990 READ: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=3325MiB (3487MB), run=60001-60001msec 00:15:36.990 WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=3321MiB (3482MB), run=60001-60001msec 00:15:36.990 00:15:36.990 Disk stats (read/write): 00:15:36.990 ublkb1: ios=848354/847217, merge=0/0, ticks=4026073/3472686, in_queue=7498759, util=99.89% 00:15:36.990 06:39:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:36.990 06:39:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.991 [2024-11-19 06:39:26.465006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:36.991 [2024-11-19 06:39:26.502971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:36.991 [2024-11-19 06:39:26.503200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:36.991 [2024-11-19 06:39:26.511966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:36.991 [2024-11-19 06:39:26.516033] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:36.991 [2024-11-19 06:39:26.516046] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.991 06:39:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.991 [2024-11-19 06:39:26.520083] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:36.991 [2024-11-19 06:39:26.526940] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:36.991 [2024-11-19 06:39:26.526971] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.991 06:39:26 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:36.991 06:39:26 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:36.991 06:39:26 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71468 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71468 ']' 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71468 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71468 00:15:36.991 killing process with pid 71468 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71468' 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71468 00:15:36.991 06:39:26 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 71468 00:15:36.991 [2024-11-19 06:39:27.668417] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:36.991 [2024-11-19 06:39:27.668475] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:36.991 00:15:36.991 real 1m4.851s 00:15:36.991 user 1m47.952s 00:15:36.991 sys 0m22.062s 00:15:36.991 ************************************ 00:15:36.991 END TEST ublk_recovery 00:15:36.991 ************************************ 00:15:36.991 06:39:28 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.991 06:39:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.252 06:39:28 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:37.252 06:39:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:37.252 06:39:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.252 06:39:28 -- common/autotest_common.sh@10 -- # set +x 00:15:37.252 06:39:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:37.252 06:39:28 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.252 06:39:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.252 06:39:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.252 06:39:28 -- common/autotest_common.sh@10 -- # set +x 00:15:37.252 ************************************ 00:15:37.252 START TEST ftl 00:15:37.252 ************************************ 00:15:37.252 06:39:29 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.252 * Looking for test storage... 
00:15:37.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.252 06:39:29 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.252 06:39:29 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.252 06:39:29 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.252 06:39:29 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.252 06:39:29 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.252 06:39:29 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.252 06:39:29 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.252 06:39:29 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.252 06:39:29 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.252 06:39:29 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.252 06:39:29 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.252 06:39:29 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.252 06:39:29 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.252 06:39:29 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.252 06:39:29 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.252 06:39:29 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:37.252 06:39:29 ftl -- scripts/common.sh@345 -- # : 1 00:15:37.252 06:39:29 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.252 06:39:29 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.252 06:39:29 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:37.252 06:39:29 ftl -- scripts/common.sh@353 -- # local d=1 00:15:37.253 06:39:29 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.253 06:39:29 ftl -- scripts/common.sh@355 -- # echo 1 00:15:37.253 06:39:29 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.253 06:39:29 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:37.253 06:39:29 ftl -- scripts/common.sh@353 -- # local d=2 00:15:37.253 06:39:29 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.253 06:39:29 ftl -- scripts/common.sh@355 -- # echo 2 00:15:37.253 06:39:29 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.253 06:39:29 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.253 06:39:29 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.253 06:39:29 ftl -- scripts/common.sh@368 -- # return 0 00:15:37.253 06:39:29 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.253 06:39:29 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.253 --rc genhtml_branch_coverage=1 00:15:37.253 --rc genhtml_function_coverage=1 00:15:37.253 --rc genhtml_legend=1 00:15:37.253 --rc geninfo_all_blocks=1 00:15:37.253 --rc geninfo_unexecuted_blocks=1 00:15:37.253 00:15:37.253 ' 00:15:37.253 06:39:29 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.253 --rc genhtml_branch_coverage=1 00:15:37.253 --rc genhtml_function_coverage=1 00:15:37.253 --rc genhtml_legend=1 00:15:37.253 --rc geninfo_all_blocks=1 00:15:37.253 --rc geninfo_unexecuted_blocks=1 00:15:37.253 00:15:37.253 ' 00:15:37.253 06:39:29 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.253 --rc genhtml_branch_coverage=1 00:15:37.253 --rc genhtml_function_coverage=1 00:15:37.253 --rc 
genhtml_legend=1 00:15:37.253 --rc geninfo_all_blocks=1 00:15:37.253 --rc geninfo_unexecuted_blocks=1 00:15:37.253 00:15:37.253 ' 00:15:37.253 06:39:29 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.253 --rc genhtml_branch_coverage=1 00:15:37.253 --rc genhtml_function_coverage=1 00:15:37.253 --rc genhtml_legend=1 00:15:37.253 --rc geninfo_all_blocks=1 00:15:37.253 --rc geninfo_unexecuted_blocks=1 00:15:37.253 00:15:37.253 ' 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:37.253 06:39:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.253 06:39:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.253 06:39:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.253 06:39:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:37.253 06:39:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:37.253 06:39:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.253 06:39:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.253 06:39:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.253 06:39:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.253 06:39:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.253 06:39:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:37.253 06:39:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:37.253 06:39:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.253 06:39:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.253 06:39:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:37.253 06:39:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.253 06:39:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.253 06:39:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.253 06:39:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.253 06:39:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:37.253 06:39:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:37.253 06:39:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.253 06:39:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:37.253 06:39:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:37.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.823 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.823 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.824 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.824 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.824 06:39:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72278 00:15:37.824 06:39:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72278 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 72278 ']' 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.824 06:39:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 06:39:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:38.085 [2024-11-19 06:39:29.768570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:38.085 [2024-11-19 06:39:29.768727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72278 ] 00:15:38.085 [2024-11-19 06:39:29.934327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.347 [2024-11-19 06:39:30.060133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.920 06:39:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.920 06:39:30 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:38.920 06:39:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:38.920 06:39:30 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:39.868 06:39:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:39.868 06:39:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:40.441 06:39:32 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:40.441 06:39:32 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:40.441 06:39:32 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@50 -- # break 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:40.703 06:39:32 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:40.703 06:39:32 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:40.964 06:39:32 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:40.964 06:39:32 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:40.964 06:39:32 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:40.964 06:39:32 ftl -- ftl/ftl.sh@63 -- # break 00:15:40.964 06:39:32 ftl -- ftl/ftl.sh@66 -- # killprocess 72278 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 72278 ']' 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@958 -- # kill -0 72278 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@959 -- # uname 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72278 00:15:40.964 killing process with pid 72278 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72278' 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@973 -- # kill 72278 00:15:40.964 06:39:32 ftl -- common/autotest_common.sh@978 -- # wait 72278 00:15:42.340 06:39:33 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:42.340 06:39:33 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:42.340 06:39:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:42.340 06:39:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.340 06:39:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:42.340 ************************************ 00:15:42.340 START TEST ftl_fio_basic 00:15:42.340 ************************************ 00:15:42.340 06:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:42.340 * Looking for test storage... 
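ftl.sh picks the NV cache and base devices by filtering rpc.py bdev_get_bdevs output with jq: the cache disk must be a non-zoned namespace that exposes 64-byte per-block metadata and holds at least 1310720 blocks, and the base disk is any other non-zoned namespace of at least that size; the script then iterates over the candidates and breaks at the first suitable one. Below is a minimal stand-alone sketch of that selection with the jq filters copied from the trace; the rpc.py path and the head -n1 tie-break are illustrative assumptions, not the exact loop the test runs.

    #!/usr/bin/env bash
    # Sketch: pick an NV cache disk and a base disk the way the trace above does.
    # Assumes a running SPDK target; the rpc_py path is illustrative.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache candidates: 64B metadata, not zoned, >= 1310720 blocks (filter copied from the trace).
    cache_disks=$("$rpc_py" bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    nv_cache=$(printf '%s\n' "$cache_disks" | head -n1)

    # Base candidates: any other non-zoned namespace of at least the same size.
    base_disks=$("$rpc_py" bdev_get_bdevs | jq -r \
        ".[] | select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address")
    device=$(printf '%s\n' "$base_disks" | head -n1)

    echo "nv_cache=$nv_cache base=$device"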
00:15:42.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.340 06:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.340 06:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.340 06:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.340 --rc genhtml_branch_coverage=1 00:15:42.340 --rc genhtml_function_coverage=1 00:15:42.340 --rc genhtml_legend=1 00:15:42.340 --rc geninfo_all_blocks=1 00:15:42.340 --rc geninfo_unexecuted_blocks=1 00:15:42.340 00:15:42.340 ' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.340 --rc 
genhtml_branch_coverage=1 00:15:42.340 --rc genhtml_function_coverage=1 00:15:42.340 --rc genhtml_legend=1 00:15:42.340 --rc geninfo_all_blocks=1 00:15:42.340 --rc geninfo_unexecuted_blocks=1 00:15:42.340 00:15:42.340 ' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.340 --rc genhtml_branch_coverage=1 00:15:42.340 --rc genhtml_function_coverage=1 00:15:42.340 --rc genhtml_legend=1 00:15:42.340 --rc geninfo_all_blocks=1 00:15:42.340 --rc geninfo_unexecuted_blocks=1 00:15:42.340 00:15:42.340 ' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.340 --rc genhtml_branch_coverage=1 00:15:42.340 --rc genhtml_function_coverage=1 00:15:42.340 --rc genhtml_legend=1 00:15:42.340 --rc geninfo_all_blocks=1 00:15:42.340 --rc geninfo_unexecuted_blocks=1 00:15:42.340 00:15:42.340 ' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.340 
06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72418 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72418 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72418 ']' 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
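Before any RPC is issued, fio.sh starts its own spdk_tgt on cores 0-2 (-m 7) and blocks until the target accepts commands on /var/tmp/spdk.sock, which is what the repeated "Waiting for process to start up and listen on UNIX domain socket" lines record. A rough sketch of that launch-and-wait pattern follows; the polling loop and retry budget are assumptions for illustration, not the project's waitforlisten helper.

    # Sketch: start a target and poll its RPC socket before using it.
    # The 100 x 0.1s retry budget is an assumption; paths mirror the trace.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_tgt_bin" -m 7 &
    svcpid=$!

    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is listening on the socket.
        if "$rpc_py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
    echo "spdk_tgt (pid $svcpid) is listening on $rpc_sock"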
00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.340 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.340 [2024-11-19 06:39:34.137260] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:42.340 [2024-11-19 06:39:34.137597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:15:42.599 [2024-11-19 06:39:34.297605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.599 [2024-11-19 06:39:34.385101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.599 [2024-11-19 06:39:34.385368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.599 [2024-11-19 06:39:34.385389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:43.166 06:39:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:43.425 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:43.683 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:43.683 { 00:15:43.683 "name": "nvme0n1", 00:15:43.683 "aliases": [ 00:15:43.683 "4426dc09-e80e-4d7a-8a72-e0452c41c3b9" 00:15:43.683 ], 00:15:43.683 "product_name": "NVMe disk", 00:15:43.683 "block_size": 4096, 00:15:43.683 "num_blocks": 1310720, 00:15:43.683 "uuid": "4426dc09-e80e-4d7a-8a72-e0452c41c3b9", 00:15:43.683 "numa_id": -1, 00:15:43.683 "assigned_rate_limits": { 00:15:43.683 "rw_ios_per_sec": 0, 00:15:43.683 "rw_mbytes_per_sec": 0, 00:15:43.683 "r_mbytes_per_sec": 0, 00:15:43.683 "w_mbytes_per_sec": 0 00:15:43.683 }, 00:15:43.683 "claimed": false, 00:15:43.683 "zoned": false, 00:15:43.683 "supported_io_types": { 00:15:43.683 "read": true, 00:15:43.683 "write": true, 00:15:43.683 "unmap": true, 00:15:43.683 "flush": true, 00:15:43.683 "reset": true, 00:15:43.683 "nvme_admin": true, 00:15:43.683 "nvme_io": true, 00:15:43.683 "nvme_io_md": 
false, 00:15:43.683 "write_zeroes": true, 00:15:43.683 "zcopy": false, 00:15:43.683 "get_zone_info": false, 00:15:43.683 "zone_management": false, 00:15:43.683 "zone_append": false, 00:15:43.683 "compare": true, 00:15:43.683 "compare_and_write": false, 00:15:43.683 "abort": true, 00:15:43.683 "seek_hole": false, 00:15:43.683 "seek_data": false, 00:15:43.683 "copy": true, 00:15:43.683 "nvme_iov_md": false 00:15:43.683 }, 00:15:43.683 "driver_specific": { 00:15:43.683 "nvme": [ 00:15:43.683 { 00:15:43.684 "pci_address": "0000:00:11.0", 00:15:43.684 "trid": { 00:15:43.684 "trtype": "PCIe", 00:15:43.684 "traddr": "0000:00:11.0" 00:15:43.684 }, 00:15:43.684 "ctrlr_data": { 00:15:43.684 "cntlid": 0, 00:15:43.684 "vendor_id": "0x1b36", 00:15:43.684 "model_number": "QEMU NVMe Ctrl", 00:15:43.684 "serial_number": "12341", 00:15:43.684 "firmware_revision": "8.0.0", 00:15:43.684 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:43.684 "oacs": { 00:15:43.684 "security": 0, 00:15:43.684 "format": 1, 00:15:43.684 "firmware": 0, 00:15:43.684 "ns_manage": 1 00:15:43.684 }, 00:15:43.684 "multi_ctrlr": false, 00:15:43.684 "ana_reporting": false 00:15:43.684 }, 00:15:43.684 "vs": { 00:15:43.684 "nvme_version": "1.4" 00:15:43.684 }, 00:15:43.684 "ns_data": { 00:15:43.684 "id": 1, 00:15:43.684 "can_share": false 00:15:43.684 } 00:15:43.684 } 00:15:43.684 ], 00:15:43.684 "mp_policy": "active_passive" 00:15:43.684 } 00:15:43.684 } 00:15:43.684 ]' 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:43.684 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:43.942 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:43.942 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:44.201 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=aeec755d-b30b-4e22-ab1a-e36e71982a0b 00:15:44.201 06:39:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aeec755d-b30b-4e22-ab1a-e36e71982a0b 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.201 06:39:36 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:44.201 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.460 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:44.460 { 00:15:44.460 "name": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:44.460 "aliases": [ 00:15:44.460 "lvs/nvme0n1p0" 00:15:44.460 ], 00:15:44.460 "product_name": "Logical Volume", 00:15:44.460 "block_size": 4096, 00:15:44.460 "num_blocks": 26476544, 00:15:44.460 "uuid": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:44.460 "assigned_rate_limits": { 00:15:44.460 "rw_ios_per_sec": 0, 00:15:44.460 "rw_mbytes_per_sec": 0, 00:15:44.460 "r_mbytes_per_sec": 0, 00:15:44.460 "w_mbytes_per_sec": 0 00:15:44.460 }, 00:15:44.460 "claimed": false, 00:15:44.460 "zoned": false, 00:15:44.460 "supported_io_types": { 00:15:44.460 "read": true, 00:15:44.460 "write": true, 00:15:44.460 "unmap": true, 00:15:44.460 "flush": false, 00:15:44.460 "reset": true, 00:15:44.460 "nvme_admin": false, 00:15:44.460 "nvme_io": false, 00:15:44.460 "nvme_io_md": false, 00:15:44.460 "write_zeroes": true, 00:15:44.460 "zcopy": false, 00:15:44.460 "get_zone_info": false, 00:15:44.460 "zone_management": false, 00:15:44.460 "zone_append": false, 00:15:44.460 "compare": false, 00:15:44.460 "compare_and_write": false, 00:15:44.460 "abort": false, 00:15:44.460 "seek_hole": true, 00:15:44.460 "seek_data": true, 00:15:44.460 "copy": false, 00:15:44.460 "nvme_iov_md": false 00:15:44.460 }, 00:15:44.460 "driver_specific": { 00:15:44.460 "lvol": { 00:15:44.460 "lvol_store_uuid": "aeec755d-b30b-4e22-ab1a-e36e71982a0b", 00:15:44.460 "base_bdev": "nvme0n1", 00:15:44.460 "thin_provision": true, 00:15:44.460 "num_allocated_clusters": 0, 00:15:44.460 "snapshot": false, 00:15:44.460 "clone": false, 00:15:44.460 "esnap_clone": false 00:15:44.460 } 00:15:44.460 } 00:15:44.460 } 00:15:44.460 ]' 00:15:44.460 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:44.460 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:44.460 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:44.461 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:44.719 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:44.978 { 00:15:44.978 "name": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:44.978 "aliases": [ 00:15:44.978 "lvs/nvme0n1p0" 00:15:44.978 ], 00:15:44.978 "product_name": "Logical Volume", 00:15:44.978 "block_size": 4096, 00:15:44.978 "num_blocks": 26476544, 00:15:44.978 "uuid": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:44.978 "assigned_rate_limits": { 00:15:44.978 "rw_ios_per_sec": 0, 00:15:44.978 "rw_mbytes_per_sec": 0, 00:15:44.978 "r_mbytes_per_sec": 0, 00:15:44.978 "w_mbytes_per_sec": 0 00:15:44.978 }, 00:15:44.978 "claimed": false, 00:15:44.978 "zoned": false, 00:15:44.978 "supported_io_types": { 00:15:44.978 "read": true, 00:15:44.978 "write": true, 00:15:44.978 "unmap": true, 00:15:44.978 "flush": false, 00:15:44.978 "reset": true, 00:15:44.978 "nvme_admin": false, 00:15:44.978 "nvme_io": false, 00:15:44.978 "nvme_io_md": false, 00:15:44.978 "write_zeroes": true, 00:15:44.978 "zcopy": false, 00:15:44.978 "get_zone_info": false, 00:15:44.978 "zone_management": false, 00:15:44.978 "zone_append": false, 00:15:44.978 "compare": false, 00:15:44.978 "compare_and_write": false, 00:15:44.978 "abort": false, 00:15:44.978 "seek_hole": true, 00:15:44.978 "seek_data": true, 00:15:44.978 "copy": false, 00:15:44.978 "nvme_iov_md": false 00:15:44.978 }, 00:15:44.978 "driver_specific": { 00:15:44.978 "lvol": { 00:15:44.978 "lvol_store_uuid": "aeec755d-b30b-4e22-ab1a-e36e71982a0b", 00:15:44.978 "base_bdev": "nvme0n1", 00:15:44.978 "thin_provision": true, 00:15:44.978 "num_allocated_clusters": 0, 00:15:44.978 "snapshot": false, 00:15:44.978 "clone": false, 00:15:44.978 "esnap_clone": false 00:15:44.978 } 00:15:44.978 } 00:15:44.978 } 00:15:44.978 ]' 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:44.978 06:39:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:45.236 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:45.236 06:39:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:45.495 { 00:15:45.495 "name": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:45.495 "aliases": [ 00:15:45.495 "lvs/nvme0n1p0" 00:15:45.495 ], 00:15:45.495 "product_name": "Logical Volume", 00:15:45.495 "block_size": 4096, 00:15:45.495 "num_blocks": 26476544, 00:15:45.495 "uuid": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:45.495 "assigned_rate_limits": { 00:15:45.495 "rw_ios_per_sec": 0, 00:15:45.495 "rw_mbytes_per_sec": 0, 00:15:45.495 "r_mbytes_per_sec": 0, 00:15:45.495 "w_mbytes_per_sec": 0 00:15:45.495 }, 00:15:45.495 "claimed": false, 00:15:45.495 "zoned": false, 00:15:45.495 "supported_io_types": { 00:15:45.495 "read": true, 00:15:45.495 "write": true, 00:15:45.495 "unmap": true, 00:15:45.495 "flush": false, 00:15:45.495 "reset": true, 00:15:45.495 "nvme_admin": false, 00:15:45.495 "nvme_io": false, 00:15:45.495 "nvme_io_md": false, 00:15:45.495 "write_zeroes": true, 00:15:45.495 "zcopy": false, 00:15:45.495 "get_zone_info": false, 00:15:45.495 "zone_management": false, 00:15:45.495 "zone_append": false, 00:15:45.495 "compare": false, 00:15:45.495 "compare_and_write": false, 00:15:45.495 "abort": false, 00:15:45.495 "seek_hole": true, 00:15:45.495 "seek_data": true, 00:15:45.495 "copy": false, 00:15:45.495 "nvme_iov_md": false 00:15:45.495 }, 00:15:45.495 "driver_specific": { 00:15:45.495 "lvol": { 00:15:45.495 "lvol_store_uuid": "aeec755d-b30b-4e22-ab1a-e36e71982a0b", 00:15:45.495 "base_bdev": "nvme0n1", 00:15:45.495 "thin_provision": true, 00:15:45.495 "num_allocated_clusters": 0, 00:15:45.495 "snapshot": false, 00:15:45.495 "clone": false, 00:15:45.495 "esnap_clone": false 00:15:45.495 } 00:15:45.495 } 00:15:45.495 } 00:15:45.495 ]' 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:45.495 06:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b7b4a9fa-fc1d-4a64-8f30-bf017699dae2 -c nvc0n1p0 --l2p_dram_limit 60 00:15:45.755 [2024-11-19 06:39:37.463158] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.463191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:45.755 [2024-11-19 06:39:37.463204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:45.755 [2024-11-19 06:39:37.463211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.463268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.463278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:45.755 [2024-11-19 06:39:37.463287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:45.755 [2024-11-19 06:39:37.463293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.463325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:45.755 [2024-11-19 06:39:37.463941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:45.755 [2024-11-19 06:39:37.463970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.463982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:45.755 [2024-11-19 06:39:37.463994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:15:45.755 [2024-11-19 06:39:37.464002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.464043] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b53a28ba-2814-46a8-91ae-cd7cbaff145c 00:15:45.755 [2024-11-19 06:39:37.465040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.465144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:45.755 [2024-11-19 06:39:37.465158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:45.755 [2024-11-19 06:39:37.465167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.469941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.469965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:45.755 [2024-11-19 06:39:37.469974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:15:45.755 [2024-11-19 06:39:37.469981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.470064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.470077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:45.755 [2024-11-19 06:39:37.470085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:45.755 [2024-11-19 06:39:37.470094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.470131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.470141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:45.755 [2024-11-19 06:39:37.470148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:45.755 [2024-11-19 06:39:37.470155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:45.755 [2024-11-19 06:39:37.470182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:45.755 [2024-11-19 06:39:37.473030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.473051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:45.755 [2024-11-19 06:39:37.473063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.850 ms 00:15:45.755 [2024-11-19 06:39:37.473071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.473105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.473112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:45.755 [2024-11-19 06:39:37.473120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:45.755 [2024-11-19 06:39:37.473126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.473160] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:45.755 [2024-11-19 06:39:37.473276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:45.755 [2024-11-19 06:39:37.473292] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:45.755 [2024-11-19 06:39:37.473302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:45.755 [2024-11-19 06:39:37.473312] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:45.755 [2024-11-19 06:39:37.473320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:45.755 [2024-11-19 06:39:37.473327] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:45.755 [2024-11-19 06:39:37.473333] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:45.755 [2024-11-19 06:39:37.473341] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:45.755 [2024-11-19 06:39:37.473347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:45.755 [2024-11-19 06:39:37.473354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.473362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:45.755 [2024-11-19 06:39:37.473371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:15:45.755 [2024-11-19 06:39:37.473377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.473458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.755 [2024-11-19 06:39:37.473465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:45.755 [2024-11-19 06:39:37.473473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:45.755 [2024-11-19 06:39:37.473478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.755 [2024-11-19 06:39:37.473575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:45.755 [2024-11-19 06:39:37.473583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:45.755 
[2024-11-19 06:39:37.473592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.755 [2024-11-19 06:39:37.473598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.755 [2024-11-19 06:39:37.473606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:45.755 [2024-11-19 06:39:37.473611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:45.755 [2024-11-19 06:39:37.473619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:45.755 [2024-11-19 06:39:37.473625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:45.755 [2024-11-19 06:39:37.473632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:45.755 [2024-11-19 06:39:37.473637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.755 [2024-11-19 06:39:37.473644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:45.755 [2024-11-19 06:39:37.473650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:45.755 [2024-11-19 06:39:37.473657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.755 [2024-11-19 06:39:37.473663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:45.755 [2024-11-19 06:39:37.473669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:45.755 [2024-11-19 06:39:37.473675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.755 [2024-11-19 06:39:37.473685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:45.756 [2024-11-19 06:39:37.473690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:45.756 [2024-11-19 06:39:37.473713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:45.756 [2024-11-19 06:39:37.473729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:45.756 [2024-11-19 06:39:37.473747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:45.756 [2024-11-19 06:39:37.473764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:45.756 [2024-11-19 06:39:37.473783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:45.756 [2024-11-19 06:39:37.473795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:45.756 [2024-11-19 06:39:37.473808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:45.756 [2024-11-19 06:39:37.473815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:45.756 [2024-11-19 06:39:37.473820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:45.756 [2024-11-19 06:39:37.473826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:45.756 [2024-11-19 06:39:37.473832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:45.756 [2024-11-19 06:39:37.473843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:45.756 [2024-11-19 06:39:37.473850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473856] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:45.756 [2024-11-19 06:39:37.473863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:45.756 [2024-11-19 06:39:37.473869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.756 [2024-11-19 06:39:37.473883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:45.756 [2024-11-19 06:39:37.473891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:45.756 [2024-11-19 06:39:37.473896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:45.756 [2024-11-19 06:39:37.473904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:45.756 [2024-11-19 06:39:37.473909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:45.756 [2024-11-19 06:39:37.473916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:45.756 [2024-11-19 06:39:37.473936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:45.756 [2024-11-19 06:39:37.473950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.473961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:45.756 [2024-11-19 06:39:37.473972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:45.756 [2024-11-19 06:39:37.473981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:45.756 [2024-11-19 06:39:37.473998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:45.756 [2024-11-19 06:39:37.474007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:45.756 [2024-11-19 06:39:37.474018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:45.756 [2024-11-19 
06:39:37.474024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:45.756 [2024-11-19 06:39:37.474032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:45.756 [2024-11-19 06:39:37.474038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:45.756 [2024-11-19 06:39:37.474046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:45.756 [2024-11-19 06:39:37.474077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:45.756 [2024-11-19 06:39:37.474085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:45.756 [2024-11-19 06:39:37.474100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:45.756 [2024-11-19 06:39:37.474106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:45.756 [2024-11-19 06:39:37.474113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:45.756 [2024-11-19 06:39:37.474120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.756 [2024-11-19 06:39:37.474127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:45.756 [2024-11-19 06:39:37.474133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:15:45.756 [2024-11-19 06:39:37.474141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.756 [2024-11-19 06:39:37.474204] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
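The layout dump above can be sanity-checked by hand: 20971520 L2P entries at 4 bytes each occupy 80 MiB, matching the 80.00 MiB l2p region, and the same 20971520 logical blocks of 4 KiB give the 81920 MiB (80 GiB) capacity that ftl0 later reports as num_blocks 20971520. The --l2p_dram_limit 60 passed to bdev_ftl_create caps the resident part of that table below its full 80 MiB, which is why startup later reports an l2p maximum resident size of 59 (of 60) MiB. A quick arithmetic check:

    # Pure arithmetic cross-check of the numbers printed in the layout dump.
    l2p_entries=20971520     # "L2P entries" from the dump
    l2p_addr_size=4          # bytes per entry, "L2P address size"
    block_size=4096          # bytes, block size of the base bdev

    echo "l2p table size: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # 80 MiB
    echo "ftl0 capacity : $(( l2p_entries * block_size / 1024 / 1024 )) MiB"      # 81920 MiB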
00:15:45.756 [2024-11-19 06:39:37.474219] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:47.658 [2024-11-19 06:39:39.527607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.527811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:47.658 [2024-11-19 06:39:39.527832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2053.393 ms 00:15:47.658 [2024-11-19 06:39:39.527841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.548439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.548547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.658 [2024-11-19 06:39:39.548600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.378 ms 00:15:47.658 [2024-11-19 06:39:39.548622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.548743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.548770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:47.658 [2024-11-19 06:39:39.548787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:47.658 [2024-11-19 06:39:39.548806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.582694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.582833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.658 [2024-11-19 06:39:39.582907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.790 ms 00:15:47.658 [2024-11-19 06:39:39.582960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.583042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.583076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.658 [2024-11-19 06:39:39.583150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:47.658 [2024-11-19 06:39:39.583177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.583563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.583614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.658 [2024-11-19 06:39:39.583746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:15:47.658 [2024-11-19 06:39:39.583780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.658 [2024-11-19 06:39:39.583938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.658 [2024-11-19 06:39:39.583969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.658 [2024-11-19 06:39:39.584032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:15:47.658 [2024-11-19 06:39:39.584064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.599448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.599568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.917 [2024-11-19 
06:39:39.599698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.341 ms 00:15:47.917 [2024-11-19 06:39:39.599736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.608713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:47.917 [2024-11-19 06:39:39.621011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.621107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:47.917 [2024-11-19 06:39:39.621147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.057 ms 00:15:47.917 [2024-11-19 06:39:39.621167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.667295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.667405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:47.917 [2024-11-19 06:39:39.667467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.088 ms 00:15:47.917 [2024-11-19 06:39:39.667486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.667689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.667747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:47.917 [2024-11-19 06:39:39.667790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:15:47.917 [2024-11-19 06:39:39.667808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.685406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.685494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:47.917 [2024-11-19 06:39:39.685544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.541 ms 00:15:47.917 [2024-11-19 06:39:39.685563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.702699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.702786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:47.917 [2024-11-19 06:39:39.702840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.090 ms 00:15:47.917 [2024-11-19 06:39:39.702857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.703318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.703394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:47.917 [2024-11-19 06:39:39.703445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:15:47.917 [2024-11-19 06:39:39.703464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.753519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.753611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:47.917 [2024-11-19 06:39:39.753660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.001 ms 00:15:47.917 [2024-11-19 06:39:39.753681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 
06:39:39.772834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.772921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:47.917 [2024-11-19 06:39:39.773013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:15:47.917 [2024-11-19 06:39:39.773037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.790637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.790727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:47.917 [2024-11-19 06:39:39.790772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.550 ms 00:15:47.917 [2024-11-19 06:39:39.790790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.808545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.808631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.917 [2024-11-19 06:39:39.808672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.707 ms 00:15:47.917 [2024-11-19 06:39:39.808689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.808736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.808809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.917 [2024-11-19 06:39:39.808855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:47.917 [2024-11-19 06:39:39.808876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.809006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.917 [2024-11-19 06:39:39.809064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.917 [2024-11-19 06:39:39.809107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:47.917 [2024-11-19 06:39:39.809126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.917 [2024-11-19 06:39:39.809938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2346.441 ms, result 0 00:15:47.917 { 00:15:47.917 "name": "ftl0", 00:15:47.917 "uuid": "b53a28ba-2814-46a8-91ae-cd7cbaff145c" 00:15:47.917 } 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.917 06:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:48.177 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:48.438 [ 00:15:48.438 { 00:15:48.438 "name": "ftl0", 00:15:48.438 "aliases": [ 00:15:48.438 "b53a28ba-2814-46a8-91ae-cd7cbaff145c" 00:15:48.438 ], 00:15:48.438 "product_name": "FTL 
disk", 00:15:48.438 "block_size": 4096, 00:15:48.438 "num_blocks": 20971520, 00:15:48.438 "uuid": "b53a28ba-2814-46a8-91ae-cd7cbaff145c", 00:15:48.438 "assigned_rate_limits": { 00:15:48.438 "rw_ios_per_sec": 0, 00:15:48.438 "rw_mbytes_per_sec": 0, 00:15:48.438 "r_mbytes_per_sec": 0, 00:15:48.438 "w_mbytes_per_sec": 0 00:15:48.438 }, 00:15:48.438 "claimed": false, 00:15:48.438 "zoned": false, 00:15:48.438 "supported_io_types": { 00:15:48.438 "read": true, 00:15:48.438 "write": true, 00:15:48.438 "unmap": true, 00:15:48.438 "flush": true, 00:15:48.438 "reset": false, 00:15:48.438 "nvme_admin": false, 00:15:48.438 "nvme_io": false, 00:15:48.438 "nvme_io_md": false, 00:15:48.438 "write_zeroes": true, 00:15:48.438 "zcopy": false, 00:15:48.438 "get_zone_info": false, 00:15:48.438 "zone_management": false, 00:15:48.438 "zone_append": false, 00:15:48.438 "compare": false, 00:15:48.438 "compare_and_write": false, 00:15:48.438 "abort": false, 00:15:48.438 "seek_hole": false, 00:15:48.438 "seek_data": false, 00:15:48.438 "copy": false, 00:15:48.438 "nvme_iov_md": false 00:15:48.438 }, 00:15:48.438 "driver_specific": { 00:15:48.438 "ftl": { 00:15:48.438 "base_bdev": "b7b4a9fa-fc1d-4a64-8f30-bf017699dae2", 00:15:48.438 "cache": "nvc0n1p0" 00:15:48.438 } 00:15:48.438 } 00:15:48.438 } 00:15:48.438 ] 00:15:48.438 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:48.438 06:39:40 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:48.438 06:39:40 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:48.698 06:39:40 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:48.698 06:39:40 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:48.698 [2024-11-19 06:39:40.627392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.698 [2024-11-19 06:39:40.627441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:48.698 [2024-11-19 06:39:40.627456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.698 [2024-11-19 06:39:40.627467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.698 [2024-11-19 06:39:40.627506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:48.958 [2024-11-19 06:39:40.630122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.958 [2024-11-19 06:39:40.630149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:48.958 [2024-11-19 06:39:40.630163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:15:48.958 [2024-11-19 06:39:40.630171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.958 [2024-11-19 06:39:40.630732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.958 [2024-11-19 06:39:40.630748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:48.958 [2024-11-19 06:39:40.630760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:15:48.958 [2024-11-19 06:39:40.630767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.958 [2024-11-19 06:39:40.634024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.958 [2024-11-19 06:39:40.634044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:48.958 
[2024-11-19 06:39:40.634055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:15:48.958 [2024-11-19 06:39:40.634063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.958 [2024-11-19 06:39:40.640424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.958 [2024-11-19 06:39:40.640520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:48.959 [2024-11-19 06:39:40.640582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.330 ms 00:15:48.959 [2024-11-19 06:39:40.640610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.664501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.664614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:48.959 [2024-11-19 06:39:40.664676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.785 ms 00:15:48.959 [2024-11-19 06:39:40.664700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.679576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.679688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:48.959 [2024-11-19 06:39:40.679748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.774 ms 00:15:48.959 [2024-11-19 06:39:40.679778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.680038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.680077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:48.959 [2024-11-19 06:39:40.680101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:15:48.959 [2024-11-19 06:39:40.680157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.703371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.703481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:48.959 [2024-11-19 06:39:40.703537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.172 ms 00:15:48.959 [2024-11-19 06:39:40.703564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.727023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.727141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:48.959 [2024-11-19 06:39:40.727204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms 00:15:48.959 [2024-11-19 06:39:40.727233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.749875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.749995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:48.959 [2024-11-19 06:39:40.750054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.542 ms 00:15:48.959 [2024-11-19 06:39:40.750082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.772676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.959 [2024-11-19 06:39:40.772774] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:48.959 [2024-11-19 06:39:40.772828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.438 ms 00:15:48.959 [2024-11-19 06:39:40.772851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.959 [2024-11-19 06:39:40.773315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:48.959 [2024-11-19 06:39:40.773383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.773969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.774977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 
[2024-11-19 06:39:40.775011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.775985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.776121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:48.959 [2024-11-19 06:39:40.776155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:48.960 [2024-11-19 06:39:40.776192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.776966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.777995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:48.960 [2024-11-19 06:39:40.778092] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:48.960 [2024-11-19 06:39:40.778103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b53a28ba-2814-46a8-91ae-cd7cbaff145c 00:15:48.960 [2024-11-19 06:39:40.778111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:48.960 [2024-11-19 06:39:40.778122] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:48.960 [2024-11-19 06:39:40.778129] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:48.960 [2024-11-19 06:39:40.778140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:48.960 [2024-11-19 06:39:40.778149] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:48.960 [2024-11-19 06:39:40.778158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:48.960 [2024-11-19 06:39:40.778166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:48.960 [2024-11-19 06:39:40.778174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:48.960 [2024-11-19 06:39:40.778181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:48.960 [2024-11-19 06:39:40.778192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.960 [2024-11-19 06:39:40.778200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:48.960 [2024-11-19 06:39:40.778211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:15:48.960 [2024-11-19 06:39:40.778219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.960 [2024-11-19 06:39:40.790554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.960 [2024-11-19 06:39:40.790583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:48.960 [2024-11-19 06:39:40.790610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:15:48.960 [2024-11-19 06:39:40.790618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.960 [2024-11-19 06:39:40.791014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.960 [2024-11-19 06:39:40.791030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:48.960 [2024-11-19 06:39:40.791042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:15:48.960 [2024-11-19 06:39:40.791050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.960 [2024-11-19 06:39:40.835063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.960 [2024-11-19 06:39:40.835092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.960 [2024-11-19 06:39:40.835104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.960 [2024-11-19 06:39:40.835112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:48.960 [2024-11-19 06:39:40.835175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.960 [2024-11-19 06:39:40.835184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.960 [2024-11-19 06:39:40.835194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.960 [2024-11-19 06:39:40.835201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.960 [2024-11-19 06:39:40.835295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.960 [2024-11-19 06:39:40.835310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.960 [2024-11-19 06:39:40.835323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.960 [2024-11-19 06:39:40.835331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.960 [2024-11-19 06:39:40.835362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.961 [2024-11-19 06:39:40.835371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.961 [2024-11-19 06:39:40.835381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.961 [2024-11-19 06:39:40.835388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.902239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.902366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.220 [2024-11-19 06:39:40.902381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.902388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:49.220 [2024-11-19 06:39:40.950408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.220 [2024-11-19 06:39:40.950506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.220 [2024-11-19 06:39:40.950612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.220 [2024-11-19 06:39:40.950723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 
06:39:40.950729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:49.220 [2024-11-19 06:39:40.950785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.220 [2024-11-19 06:39:40.950846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.950903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.220 [2024-11-19 06:39:40.950914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.220 [2024-11-19 06:39:40.950933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.220 [2024-11-19 06:39:40.950939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.220 [2024-11-19 06:39:40.951079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.677 ms, result 0 00:15:49.220 true 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72418 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72418 ']' 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72418 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72418 00:15:49.220 killing process with pid 72418 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72418' 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72418 00:15:49.220 06:39:40 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72418 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:55.786 06:39:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.786 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:55.786 fio-3.35 00:15:55.786 Starting 1 thread 00:16:01.070 00:16:01.070 test: (groupid=0, jobs=1): err= 0: pid=72593: Tue Nov 19 06:39:52 2024 00:16:01.070 read: IOPS=887, BW=58.9MiB/s (61.8MB/s)(255MiB/4318msec) 00:16:01.070 slat (nsec): min=4043, max=21691, avg=5483.67, stdev=1813.06 00:16:01.070 clat (usec): min=294, max=1269, avg=507.47, stdev=135.33 00:16:01.070 lat (usec): min=299, max=1275, avg=512.96, stdev=135.50 00:16:01.070 clat percentiles (usec): 00:16:01.070 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 416], 00:16:01.070 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 494], 00:16:01.070 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 619], 95.00th=[ 840], 00:16:01.070 | 99.00th=[ 938], 99.50th=[ 1037], 99.90th=[ 1123], 99.95th=[ 1188], 00:16:01.070 | 99.99th=[ 1270] 00:16:01.070 write: IOPS=893, BW=59.4MiB/s (62.2MB/s)(256MiB/4314msec); 0 zone resets 00:16:01.070 slat (nsec): min=14505, max=47452, avg=20334.05, stdev=3756.45 00:16:01.070 clat (usec): min=314, max=1771, avg=578.77, stdev=160.23 00:16:01.070 lat (usec): min=333, max=1795, avg=599.10, stdev=159.48 00:16:01.070 clat percentiles (usec): 00:16:01.070 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 392], 20.00th=[ 502], 00:16:01.070 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 570], 60.00th=[ 578], 00:16:01.070 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 914], 00:16:01.070 | 99.00th=[ 1156], 99.50th=[ 1516], 99.90th=[ 1745], 99.95th=[ 1762], 00:16:01.070 | 99.99th=[ 1778] 00:16:01.070 bw ( KiB/s): min=54589, max=70312, per=97.12%, avg=59030.62, stdev=4772.59, samples=8 00:16:01.070 iops : min= 802, max= 1034, avg=868.00, stdev=70.29, samples=8 00:16:01.070 lat (usec) : 500=40.81%, 750=50.66%, 1000=7.45% 
00:16:01.070 lat (msec) : 2=1.08% 00:16:01.070 cpu : usr=99.28%, sys=0.05%, ctx=8, majf=0, minf=1169 00:16:01.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.070 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.070 00:16:01.070 Run status group 0 (all jobs): 00:16:01.070 READ: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=255MiB (267MB), run=4318-4318msec 00:16:01.070 WRITE: bw=59.4MiB/s (62.2MB/s), 59.4MiB/s-59.4MiB/s (62.2MB/s-62.2MB/s), io=256MiB (269MB), run=4314-4314msec 00:16:02.983 ----------------------------------------------------- 00:16:02.983 Suppressions used: 00:16:02.983 count bytes template 00:16:02.983 1 5 /usr/src/fio/parse.c 00:16:02.983 1 8 libtcmalloc_minimal.so 00:16:02.983 1 904 libcrypto.so 00:16:02.983 ----------------------------------------------------- 00:16:02.983 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.983 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:02.984 06:39:54 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.984 06:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:02.984 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:02.984 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:02.984 fio-3.35 00:16:02.984 Starting 2 threads 00:16:35.093 00:16:35.093 first_half: (groupid=0, jobs=1): err= 0: pid=72700: Tue Nov 19 06:40:24 2024 00:16:35.093 read: IOPS=2291, BW=9168KiB/s (9388kB/s)(255MiB/28494msec) 00:16:35.093 slat (usec): min=2, max=791, avg= 4.82, stdev= 3.82 00:16:35.093 clat (usec): min=868, max=415364, avg=43548.10, stdev=30969.11 00:16:35.093 lat (usec): min=872, max=415371, avg=43552.92, stdev=30969.31 00:16:35.093 clat percentiles (msec): 00:16:35.093 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:16:35.093 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 39], 00:16:35.093 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 53], 95.00th=[ 80], 00:16:35.093 | 99.00th=[ 220], 99.50th=[ 268], 99.90th=[ 321], 99.95th=[ 342], 00:16:35.093 | 99.99th=[ 401] 00:16:35.093 write: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(256MiB/23785msec); 0 zone resets 00:16:35.093 slat (usec): min=3, max=2802, avg= 6.27, stdev=15.95 00:16:35.093 clat (usec): min=516, max=102332, avg=12234.14, stdev=18185.16 00:16:35.093 lat (usec): min=520, max=102336, avg=12240.41, stdev=18185.18 00:16:35.093 clat percentiles (usec): 00:16:35.093 | 1.00th=[ 1037], 5.00th=[ 1418], 10.00th=[ 1680], 20.00th=[ 2474], 00:16:35.093 | 30.00th=[ 4359], 40.00th=[ 5669], 50.00th=[ 7046], 60.00th=[ 8848], 00:16:35.093 | 70.00th=[ 11207], 80.00th=[ 14353], 90.00th=[ 20579], 95.00th=[ 52167], 00:16:35.093 | 99.00th=[ 91751], 99.50th=[ 93848], 99.90th=[ 98042], 99.95th=[ 99091], 00:16:35.093 | 99.99th=[100140] 00:16:35.093 bw ( KiB/s): min= 264, max=39520, per=100.00%, avg=20971.52, stdev=11161.33, samples=25 00:16:35.093 iops : min= 66, max= 9880, avg=5242.88, stdev=2790.33, samples=25 00:16:35.093 lat (usec) : 750=0.05%, 1000=0.36% 00:16:35.093 lat (msec) : 2=7.36%, 4=6.62%, 10=18.89%, 20=12.02%, 50=45.55% 00:16:35.093 lat (msec) : 100=7.40%, 250=1.44%, 500=0.32% 00:16:35.093 cpu : usr=98.60%, sys=0.39%, ctx=104, majf=0, minf=5573 00:16:35.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:35.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.093 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.093 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.093 second_half: (groupid=0, jobs=1): err= 0: pid=72701: Tue Nov 19 06:40:24 2024 00:16:35.093 read: IOPS=2266, BW=9066KiB/s (9284kB/s)(255MiB/28818msec) 00:16:35.093 slat (nsec): min=2919, max=54582, avg=4675.65, stdev=1524.60 00:16:35.093 clat (usec): min=1129, max=418235, avg=42792.01, stdev=32905.53 00:16:35.093 lat (usec): min=1132, max=418240, avg=42796.68, stdev=32905.61 00:16:35.093 clat percentiles (msec): 00:16:35.093 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:16:35.093 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 39], 00:16:35.093 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 
51], 95.00th=[ 63], 00:16:35.093 | 99.00th=[ 245], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 334], 00:16:35.093 | 99.99th=[ 414] 00:16:35.093 write: IOPS=2515, BW=9.83MiB/s (10.3MB/s)(256MiB/26050msec); 0 zone resets 00:16:35.093 slat (usec): min=3, max=1222, avg= 6.25, stdev= 9.98 00:16:35.093 clat (usec): min=427, max=102458, avg=13602.31, stdev=20219.05 00:16:35.093 lat (usec): min=431, max=102463, avg=13608.55, stdev=20219.15 00:16:35.093 clat percentiles (usec): 00:16:35.093 | 1.00th=[ 1012], 5.00th=[ 1352], 10.00th=[ 1549], 20.00th=[ 1958], 00:16:35.093 | 30.00th=[ 2933], 40.00th=[ 4817], 50.00th=[ 6980], 60.00th=[ 9241], 00:16:35.093 | 70.00th=[ 11731], 80.00th=[ 16057], 90.00th=[ 35390], 95.00th=[ 73925], 00:16:35.093 | 99.00th=[ 91751], 99.50th=[ 93848], 99.90th=[ 99091], 99.95th=[100140], 00:16:35.093 | 99.99th=[101188] 00:16:35.093 bw ( KiB/s): min= 288, max=52296, per=86.83%, avg=17476.27, stdev=13250.10, samples=30 00:16:35.093 iops : min= 72, max=13074, avg=4369.07, stdev=3312.52, samples=30 00:16:35.093 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.42% 00:16:35.093 lat (msec) : 2=9.86%, 4=8.17%, 10=13.42%, 20=12.22%, 50=47.20% 00:16:35.093 lat (msec) : 100=6.84%, 250=1.40%, 500=0.42% 00:16:35.093 cpu : usr=99.24%, sys=0.15%, ctx=37, majf=0, minf=5548 00:16:35.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:35.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.093 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.093 issued rwts: total=65318,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.093 00:16:35.093 Run status group 0 (all jobs): 00:16:35.093 READ: bw=17.7MiB/s (18.6MB/s), 9066KiB/s-9168KiB/s (9284kB/s-9388kB/s), io=510MiB (535MB), run=28494-28818msec 00:16:35.093 WRITE: bw=19.7MiB/s (20.6MB/s), 9.83MiB/s-10.8MiB/s (10.3MB/s-11.3MB/s), io=512MiB (537MB), run=23785-26050msec 00:16:35.093 ----------------------------------------------------- 00:16:35.093 Suppressions used: 00:16:35.093 count bytes template 00:16:35.093 2 10 /usr/src/fio/parse.c 00:16:35.093 5 480 /usr/src/fio/iolog.c 00:16:35.093 1 8 libtcmalloc_minimal.so 00:16:35.093 1 904 libcrypto.so 00:16:35.093 ----------------------------------------------------- 00:16:35.093 00:16:35.093 06:40:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:35.093 06:40:26 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.093 06:40:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:35.355 06:40:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:35.355 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:35.355 fio-3.35 00:16:35.355 Starting 1 thread 00:16:53.601 00:16:53.601 test: (groupid=0, jobs=1): err= 0: pid=73070: Tue Nov 19 06:40:44 2024 00:16:53.601 read: IOPS=6497, BW=25.4MiB/s (26.6MB/s)(255MiB/10035msec) 00:16:53.601 slat (nsec): min=2920, max=22884, avg=4472.58, stdev=1151.56 00:16:53.601 clat (usec): min=472, max=45890, avg=19691.85, stdev=2812.70 00:16:53.601 lat (usec): min=478, max=45897, avg=19696.32, stdev=2812.90 00:16:53.601 clat percentiles (usec): 00:16:53.601 | 1.00th=[14222], 5.00th=[14615], 10.00th=[15401], 20.00th=[17433], 00:16:53.601 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20055], 60.00th=[20579], 00:16:53.601 | 70.00th=[21103], 80.00th=[21627], 90.00th=[22676], 95.00th=[23725], 00:16:53.601 | 99.00th=[26608], 99.50th=[27657], 99.90th=[32113], 99.95th=[38011], 00:16:53.601 | 99.99th=[45351] 00:16:53.601 write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(256MiB/5550msec); 0 zone resets 00:16:53.601 slat (usec): min=3, max=239, avg= 5.71, stdev= 3.13 00:16:53.601 clat (usec): min=467, max=49640, avg=10792.99, stdev=10918.79 00:16:53.601 lat (usec): min=471, max=49645, avg=10798.70, stdev=10919.06 00:16:53.601 clat percentiles (usec): 00:16:53.601 | 1.00th=[ 644], 5.00th=[ 807], 10.00th=[ 963], 20.00th=[ 1156], 00:16:53.601 | 30.00th=[ 1319], 40.00th=[ 1745], 50.00th=[ 6587], 60.00th=[11994], 00:16:53.601 | 70.00th=[15664], 80.00th=[18744], 90.00th=[30540], 95.00th=[32375], 00:16:53.601 | 99.00th=[35914], 99.50th=[37487], 99.90th=[42206], 99.95th=[43779], 00:16:53.601 | 99.99th=[46400] 00:16:53.601 bw ( KiB/s): min= 3696, max=65824, per=92.50%, avg=43690.67, stdev=17329.93, samples=12 00:16:53.601 iops : min= 924, max=16456, avg=10922.67, stdev=4332.48, samples=12 00:16:53.601 lat (usec) : 500=0.01%, 750=1.74%, 1000=3.96% 00:16:53.601 lat (msec) : 2=14.73%, 4=0.72%, 10=7.09%, 20=37.85%, 50=33.92% 00:16:53.601 cpu : usr=99.10%, sys=0.14%, ctx=25, 
majf=0, minf=5565 00:16:53.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:53.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.601 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.601 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.601 00:16:53.601 Run status group 0 (all jobs): 00:16:53.601 READ: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=255MiB (267MB), run=10035-10035msec 00:16:53.601 WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=256MiB (268MB), run=5550-5550msec 00:16:53.601 ----------------------------------------------------- 00:16:53.601 Suppressions used: 00:16:53.601 count bytes template 00:16:53.601 1 5 /usr/src/fio/parse.c 00:16:53.601 2 192 /usr/src/fio/iolog.c 00:16:53.601 1 8 libtcmalloc_minimal.so 00:16:53.601 1 904 libcrypto.so 00:16:53.601 ----------------------------------------------------- 00:16:53.601 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:53.863 Remove shared memory files 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57183 /dev/shm/spdk_tgt_trace.pid71322 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:53.863 ************************************ 00:16:53.863 END TEST ftl_fio_basic 00:16:53.863 ************************************ 00:16:53.863 00:16:53.863 real 1m11.725s 00:16:53.863 user 2m28.457s 00:16:53.863 sys 0m9.091s 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.863 06:40:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 06:40:45 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:53.863 06:40:45 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:53.863 06:40:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.863 06:40:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 ************************************ 00:16:53.863 START TEST ftl_bdevperf 00:16:53.863 ************************************ 00:16:53.863 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:53.863 * Looking for test storage... 
00:16:53.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:53.863 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:53.863 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:53.863 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.124 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.125 --rc genhtml_branch_coverage=1 00:16:54.125 --rc genhtml_function_coverage=1 00:16:54.125 --rc genhtml_legend=1 00:16:54.125 --rc geninfo_all_blocks=1 00:16:54.125 --rc geninfo_unexecuted_blocks=1 00:16:54.125 00:16:54.125 ' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.125 --rc genhtml_branch_coverage=1 00:16:54.125 
--rc genhtml_function_coverage=1 00:16:54.125 --rc genhtml_legend=1 00:16:54.125 --rc geninfo_all_blocks=1 00:16:54.125 --rc geninfo_unexecuted_blocks=1 00:16:54.125 00:16:54.125 ' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.125 --rc genhtml_branch_coverage=1 00:16:54.125 --rc genhtml_function_coverage=1 00:16:54.125 --rc genhtml_legend=1 00:16:54.125 --rc geninfo_all_blocks=1 00:16:54.125 --rc geninfo_unexecuted_blocks=1 00:16:54.125 00:16:54.125 ' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.125 --rc genhtml_branch_coverage=1 00:16:54.125 --rc genhtml_function_coverage=1 00:16:54.125 --rc genhtml_legend=1 00:16:54.125 --rc geninfo_all_blocks=1 00:16:54.125 --rc geninfo_unexecuted_blocks=1 00:16:54.125 00:16:54.125 ' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73337 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73337 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73337 ']' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.125 06:40:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:54.125 [2024-11-19 06:40:45.910547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
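For orientation, the xtrace above amounts to the following condensed launch sketch (paths, PCI addresses and values are the ones used in this run; waitforlisten and killprocess are helpers from autotest_common.sh and are not reproduced here):

  device=0000:00:11.0          # base NVMe device for the FTL data volume
  cache_device=0000:00:10.0    # NVMe device that will back the NV cache
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  timeout=240                  # seconds allowed for bdev_ftl_create later on

  # Start bdevperf in wait mode (-z) so ftl0 can be assembled over RPC before any I/O runs.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!              # resolved to 73337 in this run
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$bdevperf_pid"   # returns once /var/tmp/spdk.sock is answering

Everything that follows is the RPC-driven assembly of ftl0 on top of those two devices, three perform_tests workloads, and a bdev_ftl_delete teardown.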
00:16:54.125 [2024-11-19 06:40:45.910939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73337 ] 00:16:54.387 [2024-11-19 06:40:46.067903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.387 [2024-11-19 06:40:46.188287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:54.961 06:40:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:55.222 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:55.483 { 00:16:55.483 "name": "nvme0n1", 00:16:55.483 "aliases": [ 00:16:55.483 "8f8e0979-b286-4410-bab6-228570854f33" 00:16:55.483 ], 00:16:55.483 "product_name": "NVMe disk", 00:16:55.483 "block_size": 4096, 00:16:55.483 "num_blocks": 1310720, 00:16:55.483 "uuid": "8f8e0979-b286-4410-bab6-228570854f33", 00:16:55.483 "numa_id": -1, 00:16:55.483 "assigned_rate_limits": { 00:16:55.483 "rw_ios_per_sec": 0, 00:16:55.483 "rw_mbytes_per_sec": 0, 00:16:55.483 "r_mbytes_per_sec": 0, 00:16:55.483 "w_mbytes_per_sec": 0 00:16:55.483 }, 00:16:55.483 "claimed": true, 00:16:55.483 "claim_type": "read_many_write_one", 00:16:55.483 "zoned": false, 00:16:55.483 "supported_io_types": { 00:16:55.483 "read": true, 00:16:55.483 "write": true, 00:16:55.483 "unmap": true, 00:16:55.483 "flush": true, 00:16:55.483 "reset": true, 00:16:55.483 "nvme_admin": true, 00:16:55.483 "nvme_io": true, 00:16:55.483 "nvme_io_md": false, 00:16:55.483 "write_zeroes": true, 00:16:55.483 "zcopy": false, 00:16:55.483 "get_zone_info": false, 00:16:55.483 "zone_management": false, 00:16:55.483 "zone_append": false, 00:16:55.483 "compare": true, 00:16:55.483 "compare_and_write": false, 00:16:55.483 "abort": true, 00:16:55.483 "seek_hole": false, 00:16:55.483 "seek_data": false, 00:16:55.483 "copy": true, 00:16:55.483 "nvme_iov_md": false 00:16:55.483 }, 00:16:55.483 "driver_specific": { 00:16:55.483 
"nvme": [ 00:16:55.483 { 00:16:55.483 "pci_address": "0000:00:11.0", 00:16:55.483 "trid": { 00:16:55.483 "trtype": "PCIe", 00:16:55.483 "traddr": "0000:00:11.0" 00:16:55.483 }, 00:16:55.483 "ctrlr_data": { 00:16:55.483 "cntlid": 0, 00:16:55.483 "vendor_id": "0x1b36", 00:16:55.483 "model_number": "QEMU NVMe Ctrl", 00:16:55.483 "serial_number": "12341", 00:16:55.483 "firmware_revision": "8.0.0", 00:16:55.483 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:55.483 "oacs": { 00:16:55.483 "security": 0, 00:16:55.483 "format": 1, 00:16:55.483 "firmware": 0, 00:16:55.483 "ns_manage": 1 00:16:55.483 }, 00:16:55.483 "multi_ctrlr": false, 00:16:55.483 "ana_reporting": false 00:16:55.483 }, 00:16:55.483 "vs": { 00:16:55.483 "nvme_version": "1.4" 00:16:55.483 }, 00:16:55.483 "ns_data": { 00:16:55.483 "id": 1, 00:16:55.483 "can_share": false 00:16:55.483 } 00:16:55.483 } 00:16:55.483 ], 00:16:55.483 "mp_policy": "active_passive" 00:16:55.483 } 00:16:55.483 } 00:16:55.483 ]' 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:55.483 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:55.743 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=aeec755d-b30b-4e22-ab1a-e36e71982a0b 00:16:55.743 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:55.743 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aeec755d-b30b-4e22-ab1a-e36e71982a0b 00:16:56.001 06:40:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:56.259 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=17262938-446d-4fa8-8eab-5374272bba0a 00:16:56.259 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 17262938-446d-4fa8-8eab-5374272bba0a 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:56.517 { 00:16:56.517 "name": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:56.517 "aliases": [ 00:16:56.517 "lvs/nvme0n1p0" 00:16:56.517 ], 00:16:56.517 "product_name": "Logical Volume", 00:16:56.517 "block_size": 4096, 00:16:56.517 "num_blocks": 26476544, 00:16:56.517 "uuid": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:56.517 "assigned_rate_limits": { 00:16:56.517 "rw_ios_per_sec": 0, 00:16:56.517 "rw_mbytes_per_sec": 0, 00:16:56.517 "r_mbytes_per_sec": 0, 00:16:56.517 "w_mbytes_per_sec": 0 00:16:56.517 }, 00:16:56.517 "claimed": false, 00:16:56.517 "zoned": false, 00:16:56.517 "supported_io_types": { 00:16:56.517 "read": true, 00:16:56.517 "write": true, 00:16:56.517 "unmap": true, 00:16:56.517 "flush": false, 00:16:56.517 "reset": true, 00:16:56.517 "nvme_admin": false, 00:16:56.517 "nvme_io": false, 00:16:56.517 "nvme_io_md": false, 00:16:56.517 "write_zeroes": true, 00:16:56.517 "zcopy": false, 00:16:56.517 "get_zone_info": false, 00:16:56.517 "zone_management": false, 00:16:56.517 "zone_append": false, 00:16:56.517 "compare": false, 00:16:56.517 "compare_and_write": false, 00:16:56.517 "abort": false, 00:16:56.517 "seek_hole": true, 00:16:56.517 "seek_data": true, 00:16:56.517 "copy": false, 00:16:56.517 "nvme_iov_md": false 00:16:56.517 }, 00:16:56.517 "driver_specific": { 00:16:56.517 "lvol": { 00:16:56.517 "lvol_store_uuid": "17262938-446d-4fa8-8eab-5374272bba0a", 00:16:56.517 "base_bdev": "nvme0n1", 00:16:56.517 "thin_provision": true, 00:16:56.517 "num_allocated_clusters": 0, 00:16:56.517 "snapshot": false, 00:16:56.517 "clone": false, 00:16:56.517 "esnap_clone": false 00:16:56.517 } 00:16:56.517 } 00:16:56.517 } 00:16:56.517 ]' 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:56.517 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:56.775 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.032 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:57.032 { 00:16:57.032 "name": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:57.032 "aliases": [ 00:16:57.032 "lvs/nvme0n1p0" 00:16:57.032 ], 00:16:57.032 "product_name": "Logical Volume", 00:16:57.032 "block_size": 4096, 00:16:57.032 "num_blocks": 26476544, 00:16:57.032 "uuid": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:57.032 "assigned_rate_limits": { 00:16:57.032 "rw_ios_per_sec": 0, 00:16:57.032 "rw_mbytes_per_sec": 0, 00:16:57.032 "r_mbytes_per_sec": 0, 00:16:57.032 "w_mbytes_per_sec": 0 00:16:57.032 }, 00:16:57.032 "claimed": false, 00:16:57.032 "zoned": false, 00:16:57.033 "supported_io_types": { 00:16:57.033 "read": true, 00:16:57.033 "write": true, 00:16:57.033 "unmap": true, 00:16:57.033 "flush": false, 00:16:57.033 "reset": true, 00:16:57.033 "nvme_admin": false, 00:16:57.033 "nvme_io": false, 00:16:57.033 "nvme_io_md": false, 00:16:57.033 "write_zeroes": true, 00:16:57.033 "zcopy": false, 00:16:57.033 "get_zone_info": false, 00:16:57.033 "zone_management": false, 00:16:57.033 "zone_append": false, 00:16:57.033 "compare": false, 00:16:57.033 "compare_and_write": false, 00:16:57.033 "abort": false, 00:16:57.033 "seek_hole": true, 00:16:57.033 "seek_data": true, 00:16:57.033 "copy": false, 00:16:57.033 "nvme_iov_md": false 00:16:57.033 }, 00:16:57.033 "driver_specific": { 00:16:57.033 "lvol": { 00:16:57.033 "lvol_store_uuid": "17262938-446d-4fa8-8eab-5374272bba0a", 00:16:57.033 "base_bdev": "nvme0n1", 00:16:57.033 "thin_provision": true, 00:16:57.033 "num_allocated_clusters": 0, 00:16:57.033 "snapshot": false, 00:16:57.033 "clone": false, 00:16:57.033 "esnap_clone": false 00:16:57.033 } 00:16:57.033 } 00:16:57.033 } 00:16:57.033 ]' 00:16:57.033 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:57.289 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:57.289 06:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:57.289 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d 00:16:57.545 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:57.545 { 00:16:57.545 "name": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:57.545 "aliases": [ 00:16:57.545 "lvs/nvme0n1p0" 00:16:57.545 ], 00:16:57.545 "product_name": "Logical Volume", 00:16:57.545 "block_size": 4096, 00:16:57.545 "num_blocks": 26476544, 00:16:57.545 "uuid": "5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d", 00:16:57.545 "assigned_rate_limits": { 00:16:57.545 "rw_ios_per_sec": 0, 00:16:57.545 "rw_mbytes_per_sec": 0, 00:16:57.545 "r_mbytes_per_sec": 0, 00:16:57.545 "w_mbytes_per_sec": 0 00:16:57.545 }, 00:16:57.545 "claimed": false, 00:16:57.545 "zoned": false, 00:16:57.545 "supported_io_types": { 00:16:57.545 "read": true, 00:16:57.545 "write": true, 00:16:57.545 "unmap": true, 00:16:57.545 "flush": false, 00:16:57.545 "reset": true, 00:16:57.545 "nvme_admin": false, 00:16:57.545 "nvme_io": false, 00:16:57.545 "nvme_io_md": false, 00:16:57.545 "write_zeroes": true, 00:16:57.545 "zcopy": false, 00:16:57.545 "get_zone_info": false, 00:16:57.545 "zone_management": false, 00:16:57.545 "zone_append": false, 00:16:57.545 "compare": false, 00:16:57.545 "compare_and_write": false, 00:16:57.545 "abort": false, 00:16:57.545 "seek_hole": true, 00:16:57.545 "seek_data": true, 00:16:57.545 "copy": false, 00:16:57.545 "nvme_iov_md": false 00:16:57.545 }, 00:16:57.545 "driver_specific": { 00:16:57.545 "lvol": { 00:16:57.545 "lvol_store_uuid": "17262938-446d-4fa8-8eab-5374272bba0a", 00:16:57.545 "base_bdev": "nvme0n1", 00:16:57.545 "thin_provision": true, 00:16:57.545 "num_allocated_clusters": 0, 00:16:57.545 "snapshot": false, 00:16:57.545 "clone": false, 00:16:57.545 "esnap_clone": false 00:16:57.545 } 00:16:57.545 } 00:16:57.545 } 00:16:57.545 ]' 00:16:57.545 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:57.545 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:57.545 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:57.803 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:57.803 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:57.803 06:40:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:57.803 06:40:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:57.803 06:40:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d -c nvc0n1p0 --l2p_dram_limit 20 00:16:57.803 [2024-11-19 06:40:49.667781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.803 [2024-11-19 06:40:49.667905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:57.803 [2024-11-19 06:40:49.667933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.803 [2024-11-19 06:40:49.667943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.667989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.668001] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.804 [2024-11-19 06:40:49.668009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:57.804 [2024-11-19 06:40:49.668016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.668031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:57.804 [2024-11-19 06:40:49.668588] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:57.804 [2024-11-19 06:40:49.668603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.668611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.804 [2024-11-19 06:40:49.668617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:16:57.804 [2024-11-19 06:40:49.668625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.668710] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f4ef665c-6ddb-4ac2-a147-b8b6f502f741 00:16:57.804 [2024-11-19 06:40:49.669639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.669666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:57.804 [2024-11-19 06:40:49.669675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:57.804 [2024-11-19 06:40:49.669683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.674306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.674402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.804 [2024-11-19 06:40:49.674417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:16:57.804 [2024-11-19 06:40:49.674423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.674490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.674497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.804 [2024-11-19 06:40:49.674507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:57.804 [2024-11-19 06:40:49.674513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.674548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.674555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:57.804 [2024-11-19 06:40:49.674563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:57.804 [2024-11-19 06:40:49.674568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.674584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.804 [2024-11-19 06:40:49.677432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.677521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.804 [2024-11-19 06:40:49.677533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:16:57.804 [2024-11-19 06:40:49.677540] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.677564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.677572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:57.804 [2024-11-19 06:40:49.677578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:57.804 [2024-11-19 06:40:49.677585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.677601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:57.804 [2024-11-19 06:40:49.677708] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:57.804 [2024-11-19 06:40:49.677717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:57.804 [2024-11-19 06:40:49.677727] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:57.804 [2024-11-19 06:40:49.677735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:57.804 [2024-11-19 06:40:49.677743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:57.804 [2024-11-19 06:40:49.677749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:57.804 [2024-11-19 06:40:49.677756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:57.804 [2024-11-19 06:40:49.677762] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:57.804 [2024-11-19 06:40:49.677768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:57.804 [2024-11-19 06:40:49.677774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.677785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:57.804 [2024-11-19 06:40:49.677790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:16:57.804 [2024-11-19 06:40:49.677797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.677858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.804 [2024-11-19 06:40:49.677866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:57.804 [2024-11-19 06:40:49.677872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:57.804 [2024-11-19 06:40:49.677880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.804 [2024-11-19 06:40:49.677961] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:57.804 [2024-11-19 06:40:49.677970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:57.804 [2024-11-19 06:40:49.677977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.804 [2024-11-19 06:40:49.677985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.677990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:57.804 [2024-11-19 06:40:49.677997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:57.804 
[2024-11-19 06:40:49.678008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:57.804 [2024-11-19 06:40:49.678013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.804 [2024-11-19 06:40:49.678024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:57.804 [2024-11-19 06:40:49.678031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:57.804 [2024-11-19 06:40:49.678037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.804 [2024-11-19 06:40:49.678048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:57.804 [2024-11-19 06:40:49.678053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:57.804 [2024-11-19 06:40:49.678062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:57.804 [2024-11-19 06:40:49.678074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:57.804 [2024-11-19 06:40:49.678097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:57.804 [2024-11-19 06:40:49.678115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:57.804 [2024-11-19 06:40:49.678131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:57.804 [2024-11-19 06:40:49.678148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:57.804 [2024-11-19 06:40:49.678166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.804 [2024-11-19 06:40:49.678177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:57.804 [2024-11-19 06:40:49.678182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:57.804 [2024-11-19 06:40:49.678187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.804 [2024-11-19 06:40:49.678193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:57.804 [2024-11-19 06:40:49.678198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:57.804 [2024-11-19 06:40:49.678204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:57.804 [2024-11-19 06:40:49.678215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:57.804 [2024-11-19 06:40:49.678220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678226] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:57.804 [2024-11-19 06:40:49.678232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:57.804 [2024-11-19 06:40:49.678238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.804 [2024-11-19 06:40:49.678244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.804 [2024-11-19 06:40:49.678252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:57.804 [2024-11-19 06:40:49.678257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:57.804 [2024-11-19 06:40:49.678263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:57.805 [2024-11-19 06:40:49.678267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:57.805 [2024-11-19 06:40:49.678276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:57.805 [2024-11-19 06:40:49.678282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:57.805 [2024-11-19 06:40:49.678290] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:57.805 [2024-11-19 06:40:49.678297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:57.805 [2024-11-19 06:40:49.678310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:57.805 [2024-11-19 06:40:49.678317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:57.805 [2024-11-19 06:40:49.678322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:57.805 [2024-11-19 06:40:49.678329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:57.805 [2024-11-19 06:40:49.678334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:57.805 [2024-11-19 06:40:49.678340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:57.805 [2024-11-19 06:40:49.678345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:57.805 [2024-11-19 06:40:49.678353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:57.805 [2024-11-19 06:40:49.678359] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:57.805 [2024-11-19 06:40:49.678391] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:57.805 [2024-11-19 06:40:49.678397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:57.805 [2024-11-19 06:40:49.678412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:57.805 [2024-11-19 06:40:49.678424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:57.805 [2024-11-19 06:40:49.678429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:57.805 [2024-11-19 06:40:49.678436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.805 [2024-11-19 06:40:49.678443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:57.805 [2024-11-19 06:40:49.678450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:16:57.805 [2024-11-19 06:40:49.678455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.805 [2024-11-19 06:40:49.678492] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
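The scrub notice above marks the end of FTL startup proper; before it, the whole ftl0 stack was assembled over RPC. Condensed from the earlier xtrace (a leftover lvol store was cleared first via bdev_lvol_get_lvstores / bdev_lvol_delete_lvstore; the UUIDs below are the ones generated in this run):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Data side: attach the base NVMe device and carve a thin-provisioned 103424 MiB lvol from it.
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u 17262938-446d-4fa8-8eab-5374272bba0a

  # Cache side: attach the second NVMe device and split off a 5171 MiB write-buffer partition.
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc_py bdev_split_create nvc0n1 -s 5171 1

  # Bind both into the FTL bdev under test, with a 20 MiB resident-L2P budget.
  $rpc_py -t 240 bdev_ftl_create -b ftl0 -d 5d3dcea2-ca2f-4ce5-9fd3-3a0aab7e676d -c nvc0n1p0 --l2p_dram_limit 20

The layout dump above is consistent with these parameters: 20971520 L2P entries at 4 bytes per address is 80 MiB of mapping metadata (the "Region l2p ... blocks: 80.00 MiB" line), and the 20 MiB --l2p_dram_limit matches the "l2p maximum resident size is: 19 (of 20) MiB" notice printed once the cache scrub finishes.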
00:16:57.805 [2024-11-19 06:40:49.678501] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:02.028 [2024-11-19 06:40:53.780545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.780591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:02.028 [2024-11-19 06:40:53.780607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4102.035 ms 00:17:02.028 [2024-11-19 06:40:53.780614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.801102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.801137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.028 [2024-11-19 06:40:53.801148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.324 ms 00:17:02.028 [2024-11-19 06:40:53.801154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.801238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.801246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.028 [2024-11-19 06:40:53.801255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:02.028 [2024-11-19 06:40:53.801261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.841614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.841649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.028 [2024-11-19 06:40:53.841662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.230 ms 00:17:02.028 [2024-11-19 06:40:53.841668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.841698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.841707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.028 [2024-11-19 06:40:53.841715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:02.028 [2024-11-19 06:40:53.841720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.842063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.842076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.028 [2024-11-19 06:40:53.842085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:02.028 [2024-11-19 06:40:53.842092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.842194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.842201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.028 [2024-11-19 06:40:53.842210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:02.028 [2024-11-19 06:40:53.842216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.852753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.852779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.028 [2024-11-19 
06:40:53.852788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.523 ms 00:17:02.028 [2024-11-19 06:40:53.852795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.861724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:02.028 [2024-11-19 06:40:53.866159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.866184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.028 [2024-11-19 06:40:53.866192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.315 ms 00:17:02.028 [2024-11-19 06:40:53.866200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.929799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.028 [2024-11-19 06:40:53.929839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:02.028 [2024-11-19 06:40:53.929850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.582 ms 00:17:02.028 [2024-11-19 06:40:53.929858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.028 [2024-11-19 06:40:53.930008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.029 [2024-11-19 06:40:53.930021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.029 [2024-11-19 06:40:53.930028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:02.029 [2024-11-19 06:40:53.930035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.029 [2024-11-19 06:40:53.948690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.029 [2024-11-19 06:40:53.948834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:02.029 [2024-11-19 06:40:53.948847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.621 ms 00:17:02.029 [2024-11-19 06:40:53.948856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:53.966003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:53.966031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:02.287 [2024-11-19 06:40:53.966040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.123 ms 00:17:02.287 [2024-11-19 06:40:53.966047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:53.966477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:53.966486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.287 [2024-11-19 06:40:53.966493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:02.287 [2024-11-19 06:40:53.966500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.025731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.025765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:02.287 [2024-11-19 06:40:54.025773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.211 ms 00:17:02.287 [2024-11-19 06:40:54.025781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 
06:40:54.044960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.044993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:02.287 [2024-11-19 06:40:54.045002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.131 ms 00:17:02.287 [2024-11-19 06:40:54.045011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.062889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.062920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:02.287 [2024-11-19 06:40:54.062936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.851 ms 00:17:02.287 [2024-11-19 06:40:54.062943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.081806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.081839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.287 [2024-11-19 06:40:54.081847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:17:02.287 [2024-11-19 06:40:54.081855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.081882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.081893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.287 [2024-11-19 06:40:54.081899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.287 [2024-11-19 06:40:54.081906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.081970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-19 06:40:54.081979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.287 [2024-11-19 06:40:54.081985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:02.287 [2024-11-19 06:40:54.081992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-19 06:40:54.082632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4414.522 ms, result 0 00:17:02.287 { 00:17:02.287 "name": "ftl0", 00:17:02.287 "uuid": "f4ef665c-6ddb-4ac2-a147-b8b6f502f741" 00:17:02.287 } 00:17:02.287 06:40:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:02.287 06:40:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:02.287 06:40:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:02.546 06:40:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:02.546 [2024-11-19 06:40:54.386887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:02.546 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:02.546 Zero copy mechanism will not be used. 00:17:02.546 Running I/O for 4 seconds... 
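The two traced steps right above reduce to a status check followed by the first workload; a condensed sketch (the pipeline form is inferred from the separate rpc.py / jq / grep xtrace lines):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Make sure the freshly created FTL bdev reports stats under its own name before issuing I/O.
  $rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

  # Workload 1: queue depth 1, random writes, 4 seconds, 69632-byte (68 KiB) I/Os.
  # 69632 exceeds the 65536-byte zero-copy threshold, hence the notice above.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632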
00:17:04.856 784.00 IOPS, 52.06 MiB/s [2024-11-19T06:40:57.718Z] 948.00 IOPS, 62.95 MiB/s [2024-11-19T06:40:58.652Z] 1235.67 IOPS, 82.06 MiB/s [2024-11-19T06:40:58.652Z] 1209.75 IOPS, 80.33 MiB/s 00:17:06.723 Latency(us) 00:17:06.723 [2024-11-19T06:40:58.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.723 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:06.723 ftl0 : 4.00 1209.60 80.32 0.00 0.00 873.03 164.63 2218.14 00:17:06.723 [2024-11-19T06:40:58.652Z] =================================================================================================================== 00:17:06.723 [2024-11-19T06:40:58.652Z] Total : 1209.60 80.32 0.00 0.00 873.03 164.63 2218.14 00:17:06.723 { 00:17:06.723 "results": [ 00:17:06.723 { 00:17:06.723 "job": "ftl0", 00:17:06.723 "core_mask": "0x1", 00:17:06.723 "workload": "randwrite", 00:17:06.723 "status": "finished", 00:17:06.723 "queue_depth": 1, 00:17:06.723 "io_size": 69632, 00:17:06.723 "runtime": 4.001327, 00:17:06.723 "iops": 1209.598715626091, 00:17:06.723 "mibps": 80.32491470954511, 00:17:06.723 "io_failed": 0, 00:17:06.723 "io_timeout": 0, 00:17:06.723 "avg_latency_us": 873.0309574062303, 00:17:06.723 "min_latency_us": 164.6276923076923, 00:17:06.723 "max_latency_us": 2218.1415384615384 00:17:06.723 } 00:17:06.723 ], 00:17:06.723 "core_count": 1 00:17:06.723 } 00:17:06.723 [2024-11-19 06:40:58.394658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:06.723 06:40:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:06.723 [2024-11-19 06:40:58.499297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:06.723 Running I/O for 4 seconds... 
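The MiB/s column in the table above is simply IOPS times I/O size; a quick check of the queue-depth-1 result (back-of-the-envelope only, not part of the harness):

  # 1209.60 IOPS x 69632 B per I/O, expressed in MiB/s
  awk 'BEGIN { printf "%.3f\n", 1209.60 * 69632 / 1048576 }'   # ~80.325, i.e. the 80.32 MiB/s reported

The second workload launched just above switches to queue depth 128 with 4096-byte random writes, so the next table shows behaviour at high queue depth with small I/Os rather than single-I/O latency.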
00:17:08.606 6413.00 IOPS, 25.05 MiB/s [2024-11-19T06:41:01.921Z] 5522.50 IOPS, 21.57 MiB/s [2024-11-19T06:41:02.864Z] 5338.67 IOPS, 20.85 MiB/s [2024-11-19T06:41:02.864Z] 5707.75 IOPS, 22.30 MiB/s 00:17:10.935 Latency(us) 00:17:10.935 [2024-11-19T06:41:02.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.935 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.935 ftl0 : 4.03 5700.14 22.27 0.00 0.00 22379.09 299.32 48597.46 00:17:10.935 [2024-11-19T06:41:02.864Z] =================================================================================================================== 00:17:10.935 [2024-11-19T06:41:02.864Z] Total : 5700.14 22.27 0.00 0.00 22379.09 0.00 48597.46 00:17:10.935 [2024-11-19 06:41:02.534202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:10.935 { 00:17:10.935 "results": [ 00:17:10.935 { 00:17:10.935 "job": "ftl0", 00:17:10.935 "core_mask": "0x1", 00:17:10.935 "workload": "randwrite", 00:17:10.935 "status": "finished", 00:17:10.935 "queue_depth": 128, 00:17:10.935 "io_size": 4096, 00:17:10.935 "runtime": 4.027266, 00:17:10.935 "iops": 5700.1449618674305, 00:17:10.935 "mibps": 22.26619125729465, 00:17:10.935 "io_failed": 0, 00:17:10.935 "io_timeout": 0, 00:17:10.935 "avg_latency_us": 22379.090742691704, 00:17:10.935 "min_latency_us": 299.32307692307694, 00:17:10.935 "max_latency_us": 48597.46461538462 00:17:10.935 } 00:17:10.935 ], 00:17:10.935 "core_count": 1 00:17:10.935 } 00:17:10.935 06:41:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:10.935 [2024-11-19 06:41:02.633173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:10.935 Running I/O for 4 seconds... 
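At queue depth 128 the reported throughput and average latency are roughly consistent with Little's law (outstanding I/Os ≈ IOPS × average latency); a back-of-the-envelope check of the table above, not part of the harness:

  # Implied IOPS from 128 outstanding I/Os at 22379.09 us average latency
  awk 'BEGIN { printf "%.1f\n", 128 * 1000000 / 22379.09 }'   # ~5719.6, close to the 5700.14 IOPS reported

The verify pass launched just above reuses queue depth 128 and 4096-byte I/Os but reads data back and checks it; its results below report a verification LBA range of length 0x1400000 (= 20971520, the same figure reported as the L2P entry count at startup). Once it completes, the harness tears the device down with bdev_ftl_delete -b ftl0, which drives the shutdown and persist sequence that closes out the log.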
00:17:12.823 4671.00 IOPS, 18.25 MiB/s [2024-11-19T06:41:05.698Z] 4647.00 IOPS, 18.15 MiB/s [2024-11-19T06:41:07.081Z] 4655.00 IOPS, 18.18 MiB/s [2024-11-19T06:41:07.081Z] 5015.00 IOPS, 19.59 MiB/s 00:17:15.152 Latency(us) 00:17:15.152 [2024-11-19T06:41:07.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.152 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.152 Verification LBA range: start 0x0 length 0x1400000 00:17:15.152 ftl0 : 4.01 5030.66 19.65 0.00 0.00 25376.12 217.40 43354.58 00:17:15.152 [2024-11-19T06:41:07.081Z] =================================================================================================================== 00:17:15.152 [2024-11-19T06:41:07.081Z] Total : 5030.66 19.65 0.00 0.00 25376.12 0.00 43354.58 00:17:15.152 { 00:17:15.152 "results": [ 00:17:15.152 { 00:17:15.152 "job": "ftl0", 00:17:15.152 "core_mask": "0x1", 00:17:15.152 "workload": "verify", 00:17:15.152 "status": "finished", 00:17:15.152 "verify_range": { 00:17:15.152 "start": 0, 00:17:15.152 "length": 20971520 00:17:15.152 }, 00:17:15.152 "queue_depth": 128, 00:17:15.152 "io_size": 4096, 00:17:15.152 "runtime": 4.011597, 00:17:15.152 "iops": 5030.664844948284, 00:17:15.152 "mibps": 19.651034550579233, 00:17:15.152 "io_failed": 0, 00:17:15.152 "io_timeout": 0, 00:17:15.152 "avg_latency_us": 25376.11993855607, 00:17:15.152 "min_latency_us": 217.40307692307692, 00:17:15.152 "max_latency_us": 43354.584615384614 00:17:15.152 } 00:17:15.152 ], 00:17:15.152 "core_count": 1 00:17:15.152 } 00:17:15.152 [2024-11-19 06:41:06.661177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:15.152 06:41:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:15.152 [2024-11-19 06:41:06.876210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.152 [2024-11-19 06:41:06.876477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:15.152 [2024-11-19 06:41:06.876665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:15.152 [2024-11-19 06:41:06.876702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.152 [2024-11-19 06:41:06.876751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:15.152 [2024-11-19 06:41:06.879815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.152 [2024-11-19 06:41:06.879994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:15.152 [2024-11-19 06:41:06.880131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:17:15.152 [2024-11-19 06:41:06.880161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.152 [2024-11-19 06:41:06.883383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.152 [2024-11-19 06:41:06.883567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:15.152 [2024-11-19 06:41:06.883639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:17:15.152 [2024-11-19 06:41:06.883664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.097044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.097250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:15.413 [2024-11-19 06:41:07.097337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 213.331 ms 00:17:15.413 [2024-11-19 06:41:07.097363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.103592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.103751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:15.413 [2024-11-19 06:41:07.103815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:17:15.413 [2024-11-19 06:41:07.103841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.130995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.131169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:15.413 [2024-11-19 06:41:07.131234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.044 ms 00:17:15.413 [2024-11-19 06:41:07.131258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.149063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.149233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:15.413 [2024-11-19 06:41:07.149307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.698 ms 00:17:15.413 [2024-11-19 06:41:07.149331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.149497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.149528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:15.413 [2024-11-19 06:41:07.149557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:15.413 [2024-11-19 06:41:07.149576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.175989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.176157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:15.413 [2024-11-19 06:41:07.176221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.377 ms 00:17:15.413 [2024-11-19 06:41:07.176244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.202236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.202408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:15.413 [2024-11-19 06:41:07.202474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.679 ms 00:17:15.413 [2024-11-19 06:41:07.202497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.227962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 06:41:07.228122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:15.413 [2024-11-19 06:41:07.228185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.116 ms 00:17:15.413 [2024-11-19 06:41:07.228208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.253638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.413 [2024-11-19 
06:41:07.253805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:15.413 [2024-11-19 06:41:07.253872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.058 ms 00:17:15.413 [2024-11-19 06:41:07.253895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.413 [2024-11-19 06:41:07.253960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:15.413 [2024-11-19 06:41:07.253994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:15.413 [2024-11-19 06:41:07.254453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.254990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255240] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:15.414 [2024-11-19 06:41:07.255491] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:15.415 [2024-11-19 06:41:07.255501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:15.415 [2024-11-19 06:41:07.255509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:15.415 [2024-11-19 06:41:07.255518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:15.415 [2024-11-19 06:41:07.255536] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:15.415 [2024-11-19 06:41:07.255547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4ef665c-6ddb-4ac2-a147-b8b6f502f741 00:17:15.415 [2024-11-19 06:41:07.255556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:15.415 [2024-11-19 06:41:07.255566] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:15.415 [2024-11-19 06:41:07.255576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:15.415 [2024-11-19 06:41:07.255588] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:15.415 [2024-11-19 06:41:07.255596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:15.415 [2024-11-19 06:41:07.255606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:15.415 [2024-11-19 06:41:07.255615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:15.415 [2024-11-19 06:41:07.255627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:15.415 [2024-11-19 06:41:07.255634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:15.415 [2024-11-19 06:41:07.255644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.415 [2024-11-19 06:41:07.255652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:15.415 [2024-11-19 06:41:07.255663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:17:15.415 [2024-11-19 06:41:07.255670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.269745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.415 [2024-11-19 06:41:07.269916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:15.415 [2024-11-19 06:41:07.269958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:17:15.415 [2024-11-19 06:41:07.269967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.270359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.415 [2024-11-19 06:41:07.270372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:15.415 [2024-11-19 06:41:07.270384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:15.415 [2024-11-19 06:41:07.270392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.309464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.415 [2024-11-19 06:41:07.309646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.415 [2024-11-19 06:41:07.309675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.415 [2024-11-19 06:41:07.309684] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.309758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.415 [2024-11-19 06:41:07.309766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.415 [2024-11-19 06:41:07.309777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.415 [2024-11-19 06:41:07.309785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.309885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.415 [2024-11-19 06:41:07.309901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.415 [2024-11-19 06:41:07.309912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.415 [2024-11-19 06:41:07.309947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.415 [2024-11-19 06:41:07.309968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.415 [2024-11-19 06:41:07.309977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.415 [2024-11-19 06:41:07.309988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.415 [2024-11-19 06:41:07.309999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.395625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.395684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.676 [2024-11-19 06:41:07.395704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.395713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.464992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.676 [2024-11-19 06:41:07.465066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.676 [2024-11-19 06:41:07.465192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.676 [2024-11-19 06:41:07.465293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.676 [2024-11-19 06:41:07.465435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:15.676 [2024-11-19 06:41:07.465445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:15.676 [2024-11-19 06:41:07.465500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.676 [2024-11-19 06:41:07.465586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.676 [2024-11-19 06:41:07.465668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.676 [2024-11-19 06:41:07.465680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.676 [2024-11-19 06:41:07.465688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.676 [2024-11-19 06:41:07.465835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 589.574 ms, result 0 00:17:15.676 true 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73337 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73337 ']' 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73337 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73337 00:17:15.676 killing process with pid 73337 00:17:15.676 Received shutdown signal, test time was about 4.000000 seconds 00:17:15.676 00:17:15.676 Latency(us) 00:17:15.676 [2024-11-19T06:41:07.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.676 [2024-11-19T06:41:07.605Z] =================================================================================================================== 00:17:15.676 [2024-11-19T06:41:07.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73337' 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73337 00:17:15.676 06:41:07 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73337 00:17:18.221 Remove shared memory files 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:18.221 06:41:09 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:18.221 ************************************ 00:17:18.221 END TEST ftl_bdevperf 00:17:18.221 ************************************ 00:17:18.221 00:17:18.221 real 0m24.234s 00:17:18.221 user 0m26.817s 00:17:18.221 sys 0m0.959s 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.221 06:41:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:18.221 06:41:09 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:18.221 06:41:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:18.221 06:41:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.221 06:41:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:18.221 ************************************ 00:17:18.221 START TEST ftl_trim 00:17:18.221 ************************************ 00:17:18.221 06:41:09 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:18.221 * Looking for test storage... 00:17:18.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.221 06:41:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:18.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.221 --rc genhtml_branch_coverage=1 00:17:18.221 --rc genhtml_function_coverage=1 00:17:18.221 --rc genhtml_legend=1 00:17:18.221 --rc geninfo_all_blocks=1 00:17:18.221 --rc geninfo_unexecuted_blocks=1 00:17:18.221 00:17:18.221 ' 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:18.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.221 --rc genhtml_branch_coverage=1 00:17:18.221 --rc genhtml_function_coverage=1 00:17:18.221 --rc genhtml_legend=1 00:17:18.221 --rc geninfo_all_blocks=1 00:17:18.221 --rc geninfo_unexecuted_blocks=1 00:17:18.221 00:17:18.221 ' 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:18.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.221 --rc genhtml_branch_coverage=1 00:17:18.221 --rc genhtml_function_coverage=1 00:17:18.221 --rc genhtml_legend=1 00:17:18.221 --rc geninfo_all_blocks=1 00:17:18.221 --rc geninfo_unexecuted_blocks=1 00:17:18.221 00:17:18.221 ' 00:17:18.221 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:18.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.221 --rc genhtml_branch_coverage=1 00:17:18.221 --rc genhtml_function_coverage=1 00:17:18.221 --rc genhtml_legend=1 00:17:18.221 --rc geninfo_all_blocks=1 00:17:18.221 --rc geninfo_unexecuted_blocks=1 00:17:18.221 00:17:18.221 ' 00:17:18.221 06:41:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:18.221 06:41:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:18.482 06:41:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.483 06:41:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73691 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73691 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73691 ']' 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.483 06:41:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.483 06:41:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 [2024-11-19 06:41:10.266528] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:18.483 [2024-11-19 06:41:10.267625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:17:18.801 [2024-11-19 06:41:10.437517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.801 [2024-11-19 06:41:10.561408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.801 [2024-11-19 06:41:10.561760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.801 [2024-11-19 06:41:10.561868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.375 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.375 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:19.375 06:41:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:19.949 06:41:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:19.949 06:41:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:19.949 06:41:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:19.949 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:19.949 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:19.949 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:19.949 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:19.949 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:20.211 { 00:17:20.211 "name": "nvme0n1", 00:17:20.211 "aliases": [ 
00:17:20.211 "b7863e71-84dd-4311-acfb-5514a9be7de9" 00:17:20.211 ], 00:17:20.211 "product_name": "NVMe disk", 00:17:20.211 "block_size": 4096, 00:17:20.211 "num_blocks": 1310720, 00:17:20.211 "uuid": "b7863e71-84dd-4311-acfb-5514a9be7de9", 00:17:20.211 "numa_id": -1, 00:17:20.211 "assigned_rate_limits": { 00:17:20.211 "rw_ios_per_sec": 0, 00:17:20.211 "rw_mbytes_per_sec": 0, 00:17:20.211 "r_mbytes_per_sec": 0, 00:17:20.211 "w_mbytes_per_sec": 0 00:17:20.211 }, 00:17:20.211 "claimed": true, 00:17:20.211 "claim_type": "read_many_write_one", 00:17:20.211 "zoned": false, 00:17:20.211 "supported_io_types": { 00:17:20.211 "read": true, 00:17:20.211 "write": true, 00:17:20.211 "unmap": true, 00:17:20.211 "flush": true, 00:17:20.211 "reset": true, 00:17:20.211 "nvme_admin": true, 00:17:20.211 "nvme_io": true, 00:17:20.211 "nvme_io_md": false, 00:17:20.211 "write_zeroes": true, 00:17:20.211 "zcopy": false, 00:17:20.211 "get_zone_info": false, 00:17:20.211 "zone_management": false, 00:17:20.211 "zone_append": false, 00:17:20.211 "compare": true, 00:17:20.211 "compare_and_write": false, 00:17:20.211 "abort": true, 00:17:20.211 "seek_hole": false, 00:17:20.211 "seek_data": false, 00:17:20.211 "copy": true, 00:17:20.211 "nvme_iov_md": false 00:17:20.211 }, 00:17:20.211 "driver_specific": { 00:17:20.211 "nvme": [ 00:17:20.211 { 00:17:20.211 "pci_address": "0000:00:11.0", 00:17:20.211 "trid": { 00:17:20.211 "trtype": "PCIe", 00:17:20.211 "traddr": "0000:00:11.0" 00:17:20.211 }, 00:17:20.211 "ctrlr_data": { 00:17:20.211 "cntlid": 0, 00:17:20.211 "vendor_id": "0x1b36", 00:17:20.211 "model_number": "QEMU NVMe Ctrl", 00:17:20.211 "serial_number": "12341", 00:17:20.211 "firmware_revision": "8.0.0", 00:17:20.211 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:20.211 "oacs": { 00:17:20.211 "security": 0, 00:17:20.211 "format": 1, 00:17:20.211 "firmware": 0, 00:17:20.211 "ns_manage": 1 00:17:20.211 }, 00:17:20.211 "multi_ctrlr": false, 00:17:20.211 "ana_reporting": false 00:17:20.211 }, 00:17:20.211 "vs": { 00:17:20.211 "nvme_version": "1.4" 00:17:20.211 }, 00:17:20.211 "ns_data": { 00:17:20.211 "id": 1, 00:17:20.211 "can_share": false 00:17:20.211 } 00:17:20.211 } 00:17:20.211 ], 00:17:20.211 "mp_policy": "active_passive" 00:17:20.211 } 00:17:20.211 } 00:17:20.211 ]' 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:20.211 06:41:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:17:20.211 06:41:11 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:20.211 06:41:11 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:20.211 06:41:11 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:20.211 06:41:11 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:20.211 06:41:11 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:20.473 06:41:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=17262938-446d-4fa8-8eab-5374272bba0a 00:17:20.473 06:41:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:20.473 06:41:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 17262938-446d-4fa8-8eab-5374272bba0a 00:17:20.732 06:41:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=be657fd7-4215-43b2-a9dc-883ec14c429d 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u be657fd7-4215-43b2-a9dc-883ec14c429d 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2eed2313-fa47-4159-a898-a4b959624116 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2eed2313-fa47-4159-a898-a4b959624116 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2eed2313-fa47-4159-a898-a4b959624116 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:20.990 06:41:12 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2eed2313-fa47-4159-a898-a4b959624116 00:17:20.990 06:41:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2eed2313-fa47-4159-a898-a4b959624116 00:17:20.990 06:41:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:20.990 06:41:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:20.990 06:41:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:20.990 06:41:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2eed2313-fa47-4159-a898-a4b959624116 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.249 { 00:17:21.249 "name": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:21.249 "aliases": [ 00:17:21.249 "lvs/nvme0n1p0" 00:17:21.249 ], 00:17:21.249 "product_name": "Logical Volume", 00:17:21.249 "block_size": 4096, 00:17:21.249 "num_blocks": 26476544, 00:17:21.249 "uuid": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:21.249 "assigned_rate_limits": { 00:17:21.249 "rw_ios_per_sec": 0, 00:17:21.249 "rw_mbytes_per_sec": 0, 00:17:21.249 "r_mbytes_per_sec": 0, 00:17:21.249 "w_mbytes_per_sec": 0 00:17:21.249 }, 00:17:21.249 "claimed": false, 00:17:21.249 "zoned": false, 00:17:21.249 "supported_io_types": { 00:17:21.249 "read": true, 00:17:21.249 "write": true, 00:17:21.249 "unmap": true, 00:17:21.249 "flush": false, 00:17:21.249 "reset": true, 00:17:21.249 "nvme_admin": false, 00:17:21.249 "nvme_io": false, 00:17:21.249 "nvme_io_md": false, 00:17:21.249 "write_zeroes": true, 00:17:21.249 "zcopy": false, 00:17:21.249 "get_zone_info": false, 00:17:21.249 "zone_management": false, 00:17:21.249 "zone_append": false, 00:17:21.249 "compare": false, 00:17:21.249 "compare_and_write": false, 00:17:21.249 "abort": false, 00:17:21.249 "seek_hole": true, 00:17:21.249 "seek_data": true, 00:17:21.249 "copy": false, 00:17:21.249 "nvme_iov_md": false 00:17:21.249 }, 00:17:21.249 "driver_specific": { 00:17:21.249 "lvol": { 00:17:21.249 "lvol_store_uuid": "be657fd7-4215-43b2-a9dc-883ec14c429d", 00:17:21.249 "base_bdev": "nvme0n1", 00:17:21.249 "thin_provision": true, 00:17:21.249 "num_allocated_clusters": 0, 00:17:21.249 "snapshot": false, 00:17:21.249 "clone": false, 00:17:21.249 "esnap_clone": false 00:17:21.249 } 00:17:21.249 } 00:17:21.249 } 00:17:21.249 ]' 00:17:21.249 06:41:13 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:21.249 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:21.249 06:41:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:21.249 06:41:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:21.249 06:41:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:21.508 06:41:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:21.508 06:41:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:21.508 06:41:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2eed2313-fa47-4159-a898-a4b959624116 00:17:21.508 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2eed2313-fa47-4159-a898-a4b959624116 00:17:21.508 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:21.508 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:21.508 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:21.508 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2eed2313-fa47-4159-a898-a4b959624116 00:17:21.766 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.766 { 00:17:21.766 "name": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:21.766 "aliases": [ 00:17:21.766 "lvs/nvme0n1p0" 00:17:21.766 ], 00:17:21.766 "product_name": "Logical Volume", 00:17:21.766 "block_size": 4096, 00:17:21.766 "num_blocks": 26476544, 00:17:21.766 "uuid": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:21.766 "assigned_rate_limits": { 00:17:21.766 "rw_ios_per_sec": 0, 00:17:21.766 "rw_mbytes_per_sec": 0, 00:17:21.766 "r_mbytes_per_sec": 0, 00:17:21.766 "w_mbytes_per_sec": 0 00:17:21.766 }, 00:17:21.766 "claimed": false, 00:17:21.766 "zoned": false, 00:17:21.766 "supported_io_types": { 00:17:21.766 "read": true, 00:17:21.766 "write": true, 00:17:21.766 "unmap": true, 00:17:21.766 "flush": false, 00:17:21.766 "reset": true, 00:17:21.766 "nvme_admin": false, 00:17:21.766 "nvme_io": false, 00:17:21.766 "nvme_io_md": false, 00:17:21.766 "write_zeroes": true, 00:17:21.766 "zcopy": false, 00:17:21.766 "get_zone_info": false, 00:17:21.766 "zone_management": false, 00:17:21.766 "zone_append": false, 00:17:21.766 "compare": false, 00:17:21.766 "compare_and_write": false, 00:17:21.766 "abort": false, 00:17:21.766 "seek_hole": true, 00:17:21.766 "seek_data": true, 00:17:21.766 "copy": false, 00:17:21.767 "nvme_iov_md": false 00:17:21.767 }, 00:17:21.767 "driver_specific": { 00:17:21.767 "lvol": { 00:17:21.767 "lvol_store_uuid": "be657fd7-4215-43b2-a9dc-883ec14c429d", 00:17:21.767 "base_bdev": "nvme0n1", 00:17:21.767 "thin_provision": true, 00:17:21.767 "num_allocated_clusters": 0, 00:17:21.767 "snapshot": false, 00:17:21.767 "clone": false, 00:17:21.767 "esnap_clone": false 00:17:21.767 } 00:17:21.767 } 00:17:21.767 } 00:17:21.767 ]' 00:17:21.767 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.767 06:41:13 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:17:21.767 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.767 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:21.767 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:21.767 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:21.767 06:41:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:21.767 06:41:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:22.026 06:41:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:22.026 06:41:13 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:22.026 06:41:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2eed2313-fa47-4159-a898-a4b959624116 00:17:22.026 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2eed2313-fa47-4159-a898-a4b959624116 00:17:22.026 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:22.026 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:22.026 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:22.026 06:41:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2eed2313-fa47-4159-a898-a4b959624116 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:22.285 { 00:17:22.285 "name": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:22.285 "aliases": [ 00:17:22.285 "lvs/nvme0n1p0" 00:17:22.285 ], 00:17:22.285 "product_name": "Logical Volume", 00:17:22.285 "block_size": 4096, 00:17:22.285 "num_blocks": 26476544, 00:17:22.285 "uuid": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:22.285 "assigned_rate_limits": { 00:17:22.285 "rw_ios_per_sec": 0, 00:17:22.285 "rw_mbytes_per_sec": 0, 00:17:22.285 "r_mbytes_per_sec": 0, 00:17:22.285 "w_mbytes_per_sec": 0 00:17:22.285 }, 00:17:22.285 "claimed": false, 00:17:22.285 "zoned": false, 00:17:22.285 "supported_io_types": { 00:17:22.285 "read": true, 00:17:22.285 "write": true, 00:17:22.285 "unmap": true, 00:17:22.285 "flush": false, 00:17:22.285 "reset": true, 00:17:22.285 "nvme_admin": false, 00:17:22.285 "nvme_io": false, 00:17:22.285 "nvme_io_md": false, 00:17:22.285 "write_zeroes": true, 00:17:22.285 "zcopy": false, 00:17:22.285 "get_zone_info": false, 00:17:22.285 "zone_management": false, 00:17:22.285 "zone_append": false, 00:17:22.285 "compare": false, 00:17:22.285 "compare_and_write": false, 00:17:22.285 "abort": false, 00:17:22.285 "seek_hole": true, 00:17:22.285 "seek_data": true, 00:17:22.285 "copy": false, 00:17:22.285 "nvme_iov_md": false 00:17:22.285 }, 00:17:22.285 "driver_specific": { 00:17:22.285 "lvol": { 00:17:22.285 "lvol_store_uuid": "be657fd7-4215-43b2-a9dc-883ec14c429d", 00:17:22.285 "base_bdev": "nvme0n1", 00:17:22.285 "thin_provision": true, 00:17:22.285 "num_allocated_clusters": 0, 00:17:22.285 "snapshot": false, 00:17:22.285 "clone": false, 00:17:22.285 "esnap_clone": false 00:17:22.285 } 00:17:22.285 } 00:17:22.285 } 00:17:22.285 ]' 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:22.285 06:41:14 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:22.285 06:41:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:22.285 06:41:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2eed2313-fa47-4159-a898-a4b959624116 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:22.545 [2024-11-19 06:41:14.287982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.288021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.545 [2024-11-19 06:41:14.288035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.545 [2024-11-19 06:41:14.288042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.290362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.290390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.545 [2024-11-19 06:41:14.290399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:17:22.545 [2024-11-19 06:41:14.290405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.290487] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.545 [2024-11-19 06:41:14.291065] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.545 [2024-11-19 06:41:14.291166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.291175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.545 [2024-11-19 06:41:14.291184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:17:22.545 [2024-11-19 06:41:14.291190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.291297] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:17:22.545 [2024-11-19 06:41:14.292322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.292350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:22.545 [2024-11-19 06:41:14.292359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:22.545 [2024-11-19 06:41:14.292366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.297631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.297655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.545 [2024-11-19 06:41:14.297665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.187 ms 00:17:22.545 [2024-11-19 06:41:14.297674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.297779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.297789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.545 [2024-11-19 06:41:14.297796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:17:22.545 [2024-11-19 06:41:14.297806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.297835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.297844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.545 [2024-11-19 06:41:14.297850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.545 [2024-11-19 06:41:14.297857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.297888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.545 [2024-11-19 06:41:14.300801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.300826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.545 [2024-11-19 06:41:14.300838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:17:22.545 [2024-11-19 06:41:14.300844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.300890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.300896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.545 [2024-11-19 06:41:14.300904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.545 [2024-11-19 06:41:14.300929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.300961] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:22.545 [2024-11-19 06:41:14.301063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.545 [2024-11-19 06:41:14.301075] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.545 [2024-11-19 06:41:14.301083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.545 [2024-11-19 06:41:14.301092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.545 [2024-11-19 06:41:14.301099] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.545 [2024-11-19 06:41:14.301106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.545 [2024-11-19 06:41:14.301112] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.545 [2024-11-19 06:41:14.301118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.545 [2024-11-19 06:41:14.301125] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.545 [2024-11-19 06:41:14.301132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 [2024-11-19 06:41:14.301138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.545 [2024-11-19 06:41:14.301144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:17:22.545 [2024-11-19 06:41:14.301161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.301237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.545 
[2024-11-19 06:41:14.301244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.545 [2024-11-19 06:41:14.301251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:22.545 [2024-11-19 06:41:14.301256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.545 [2024-11-19 06:41:14.301356] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.545 [2024-11-19 06:41:14.301363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.546 [2024-11-19 06:41:14.301371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.546 [2024-11-19 06:41:14.301390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.546 [2024-11-19 06:41:14.301408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.546 [2024-11-19 06:41:14.301418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.546 [2024-11-19 06:41:14.301424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.546 [2024-11-19 06:41:14.301430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.546 [2024-11-19 06:41:14.301435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.546 [2024-11-19 06:41:14.301442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.546 [2024-11-19 06:41:14.301447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.546 [2024-11-19 06:41:14.301459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.546 [2024-11-19 06:41:14.301479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.546 [2024-11-19 06:41:14.301496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.546 [2024-11-19 06:41:14.301514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:22.546 [2024-11-19 06:41:14.301530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.546 [2024-11-19 06:41:14.301549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.546 [2024-11-19 06:41:14.301559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.546 [2024-11-19 06:41:14.301565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.546 [2024-11-19 06:41:14.301571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.546 [2024-11-19 06:41:14.301575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.546 [2024-11-19 06:41:14.301581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.546 [2024-11-19 06:41:14.301586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.546 [2024-11-19 06:41:14.301598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.546 [2024-11-19 06:41:14.301604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301609] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.546 [2024-11-19 06:41:14.301616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.546 [2024-11-19 06:41:14.301621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.546 [2024-11-19 06:41:14.301635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.546 [2024-11-19 06:41:14.301643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.546 [2024-11-19 06:41:14.301648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.546 [2024-11-19 06:41:14.301654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.546 [2024-11-19 06:41:14.301659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.546 [2024-11-19 06:41:14.301666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.546 [2024-11-19 06:41:14.301674] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.546 [2024-11-19 06:41:14.301683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.546 [2024-11-19 06:41:14.301696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.546 [2024-11-19 06:41:14.301702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:22.546 [2024-11-19 06:41:14.301708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.546 [2024-11-19 06:41:14.301714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.546 [2024-11-19 06:41:14.301720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.546 [2024-11-19 06:41:14.301726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.546 [2024-11-19 06:41:14.301732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.546 [2024-11-19 06:41:14.301738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.546 [2024-11-19 06:41:14.301745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.546 [2024-11-19 06:41:14.301775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.546 [2024-11-19 06:41:14.301786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.546 [2024-11-19 06:41:14.301799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.546 [2024-11-19 06:41:14.301804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.546 [2024-11-19 06:41:14.301811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.546 [2024-11-19 06:41:14.301817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.546 [2024-11-19 06:41:14.301824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.546 [2024-11-19 06:41:14.301830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:17:22.546 [2024-11-19 06:41:14.301836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.546 [2024-11-19 06:41:14.301934] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:22.546 [2024-11-19 06:41:14.301946] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:25.077 [2024-11-19 06:41:16.895000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.895059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:25.077 [2024-11-19 06:41:16.895073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2593.054 ms 00:17:25.077 [2024-11-19 06:41:16.895084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.920501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.920543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.077 [2024-11-19 06:41:16.920555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.171 ms 00:17:25.077 [2024-11-19 06:41:16.920565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.920694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.920706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.077 [2024-11-19 06:41:16.920714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:25.077 [2024-11-19 06:41:16.920725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.961182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.961241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.077 [2024-11-19 06:41:16.961260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.406 ms 00:17:25.077 [2024-11-19 06:41:16.961277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.961391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.961412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.077 [2024-11-19 06:41:16.961425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.077 [2024-11-19 06:41:16.961440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.961828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.961855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.077 [2024-11-19 06:41:16.961869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:25.077 [2024-11-19 06:41:16.961883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.962087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.962111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.077 [2024-11-19 06:41:16.962124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:25.077 [2024-11-19 06:41:16.962141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.979772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:16.979932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:25.077 [2024-11-19 06:41:16.979949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.576 ms 00:17:25.077 [2024-11-19 06:41:16.979959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.077 [2024-11-19 06:41:16.991321] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:25.077 [2024-11-19 06:41:17.005830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.077 [2024-11-19 06:41:17.005861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.077 [2024-11-19 06:41:17.005874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:17:25.077 [2024-11-19 06:41:17.005881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.079353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.079401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:25.337 [2024-11-19 06:41:17.079417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.366 ms 00:17:25.337 [2024-11-19 06:41:17.079425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.079656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.079669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.337 [2024-11-19 06:41:17.079682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:25.337 [2024-11-19 06:41:17.079690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.102870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.103033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:25.337 [2024-11-19 06:41:17.103056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.151 ms 00:17:25.337 [2024-11-19 06:41:17.103064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.125904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.125947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:25.337 [2024-11-19 06:41:17.125975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.734 ms 00:17:25.337 [2024-11-19 06:41:17.125982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.126564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.126587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.337 [2024-11-19 06:41:17.126598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:25.337 [2024-11-19 06:41:17.126605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.195378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.195412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:25.337 [2024-11-19 06:41:17.195429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.737 ms 00:17:25.337 [2024-11-19 06:41:17.195442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:25.337 [2024-11-19 06:41:17.219515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.219546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:25.337 [2024-11-19 06:41:17.219559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.972 ms 00:17:25.337 [2024-11-19 06:41:17.219566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.241986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.242017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:25.337 [2024-11-19 06:41:17.242029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.354 ms 00:17:25.337 [2024-11-19 06:41:17.242036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.264982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.265112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.337 [2024-11-19 06:41:17.265132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.873 ms 00:17:25.337 [2024-11-19 06:41:17.265152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.265214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.265226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.337 [2024-11-19 06:41:17.265239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.337 [2024-11-19 06:41:17.265247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.265328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.337 [2024-11-19 06:41:17.265337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.337 [2024-11-19 06:41:17.265347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:25.337 [2024-11-19 06:41:17.265354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.337 [2024-11-19 06:41:17.266237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.596 [2024-11-19 06:41:17.269367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2977.931 ms, result 0 00:17:25.596 [2024-11-19 06:41:17.270225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.596 { 00:17:25.596 "name": "ftl0", 00:17:25.596 "uuid": "4ef7e4a2-b0d9-4872-9175-6abd80a5f735" 00:17:25.596 } 00:17:25.596 06:41:17 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.596 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.596 06:41:17 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:25.854 [ 00:17:25.854 { 00:17:25.854 "name": "ftl0", 00:17:25.854 "aliases": [ 00:17:25.854 "4ef7e4a2-b0d9-4872-9175-6abd80a5f735" 00:17:25.854 ], 00:17:25.854 "product_name": "FTL disk", 00:17:25.854 "block_size": 4096, 00:17:25.854 "num_blocks": 23592960, 00:17:25.854 "uuid": "4ef7e4a2-b0d9-4872-9175-6abd80a5f735", 00:17:25.854 "assigned_rate_limits": { 00:17:25.854 "rw_ios_per_sec": 0, 00:17:25.854 "rw_mbytes_per_sec": 0, 00:17:25.854 "r_mbytes_per_sec": 0, 00:17:25.854 "w_mbytes_per_sec": 0 00:17:25.854 }, 00:17:25.854 "claimed": false, 00:17:25.854 "zoned": false, 00:17:25.854 "supported_io_types": { 00:17:25.854 "read": true, 00:17:25.854 "write": true, 00:17:25.854 "unmap": true, 00:17:25.854 "flush": true, 00:17:25.854 "reset": false, 00:17:25.854 "nvme_admin": false, 00:17:25.854 "nvme_io": false, 00:17:25.854 "nvme_io_md": false, 00:17:25.854 "write_zeroes": true, 00:17:25.854 "zcopy": false, 00:17:25.854 "get_zone_info": false, 00:17:25.854 "zone_management": false, 00:17:25.854 "zone_append": false, 00:17:25.854 "compare": false, 00:17:25.854 "compare_and_write": false, 00:17:25.854 "abort": false, 00:17:25.854 "seek_hole": false, 00:17:25.854 "seek_data": false, 00:17:25.854 "copy": false, 00:17:25.854 "nvme_iov_md": false 00:17:25.854 }, 00:17:25.854 "driver_specific": { 00:17:25.854 "ftl": { 00:17:25.854 "base_bdev": "2eed2313-fa47-4159-a898-a4b959624116", 00:17:25.854 "cache": "nvc0n1p0" 00:17:25.854 } 00:17:25.854 } 00:17:25.854 } 00:17:25.854 ] 00:17:25.855 06:41:17 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:25.855 06:41:17 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:25.855 06:41:17 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:26.113 06:41:17 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:26.113 06:41:17 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:26.372 06:41:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:26.372 { 00:17:26.372 "name": "ftl0", 00:17:26.372 "aliases": [ 00:17:26.372 "4ef7e4a2-b0d9-4872-9175-6abd80a5f735" 00:17:26.372 ], 00:17:26.372 "product_name": "FTL disk", 00:17:26.372 "block_size": 4096, 00:17:26.372 "num_blocks": 23592960, 00:17:26.372 "uuid": "4ef7e4a2-b0d9-4872-9175-6abd80a5f735", 00:17:26.372 "assigned_rate_limits": { 00:17:26.372 "rw_ios_per_sec": 0, 00:17:26.372 "rw_mbytes_per_sec": 0, 00:17:26.372 "r_mbytes_per_sec": 0, 00:17:26.372 "w_mbytes_per_sec": 0 00:17:26.372 }, 00:17:26.372 "claimed": false, 00:17:26.372 "zoned": false, 00:17:26.372 "supported_io_types": { 00:17:26.372 "read": true, 00:17:26.372 "write": true, 00:17:26.372 "unmap": true, 00:17:26.372 "flush": true, 00:17:26.372 "reset": false, 00:17:26.372 "nvme_admin": false, 00:17:26.372 "nvme_io": false, 00:17:26.372 "nvme_io_md": false, 00:17:26.372 "write_zeroes": true, 00:17:26.372 "zcopy": false, 00:17:26.372 "get_zone_info": false, 00:17:26.372 "zone_management": false, 00:17:26.372 "zone_append": false, 00:17:26.372 "compare": false, 00:17:26.372 "compare_and_write": false, 00:17:26.372 "abort": false, 00:17:26.372 "seek_hole": false, 00:17:26.372 "seek_data": false, 00:17:26.372 "copy": false, 00:17:26.372 "nvme_iov_md": false 00:17:26.372 }, 00:17:26.372 "driver_specific": { 00:17:26.372 "ftl": { 00:17:26.372 "base_bdev": "2eed2313-fa47-4159-a898-a4b959624116", 
00:17:26.372 "cache": "nvc0n1p0" 00:17:26.372 } 00:17:26.372 } 00:17:26.372 } 00:17:26.372 ]' 00:17:26.372 06:41:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:26.372 06:41:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:26.372 06:41:18 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:26.372 [2024-11-19 06:41:18.301735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.372 [2024-11-19 06:41:18.301773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:26.372 [2024-11-19 06:41:18.301785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.372 [2024-11-19 06:41:18.301796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.372 [2024-11-19 06:41:18.301829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:26.632 [2024-11-19 06:41:18.303974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.304093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:26.632 [2024-11-19 06:41:18.304114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:17:26.632 [2024-11-19 06:41:18.304120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.304622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.304634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:26.632 [2024-11-19 06:41:18.304642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:17:26.632 [2024-11-19 06:41:18.304648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.307386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.307470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:26.632 [2024-11-19 06:41:18.307483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:17:26.632 [2024-11-19 06:41:18.307488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.312765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.312788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:26.632 [2024-11-19 06:41:18.312798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:17:26.632 [2024-11-19 06:41:18.312804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.331180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.331210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:26.632 [2024-11-19 06:41:18.331223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.296 ms 00:17:26.632 [2024-11-19 06:41:18.331229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.343869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.343898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:26.632 [2024-11-19 06:41:18.343909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.580 ms 00:17:26.632 [2024-11-19 06:41:18.343917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.344094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.344107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:26.632 [2024-11-19 06:41:18.344116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:26.632 [2024-11-19 06:41:18.344122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.362521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.362633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:26.632 [2024-11-19 06:41:18.362649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.376 ms 00:17:26.632 [2024-11-19 06:41:18.362654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.380538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.380564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:26.632 [2024-11-19 06:41:18.380574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.815 ms 00:17:26.632 [2024-11-19 06:41:18.380580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.398144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.398169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:26.632 [2024-11-19 06:41:18.398179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.513 ms 00:17:26.632 [2024-11-19 06:41:18.398184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.415307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.632 [2024-11-19 06:41:18.415341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:26.632 [2024-11-19 06:41:18.415350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:17:26.632 [2024-11-19 06:41:18.415356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.632 [2024-11-19 06:41:18.415418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:26.632 [2024-11-19 06:41:18.415447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415498] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:26.632 [2024-11-19 06:41:18.415600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 
[2024-11-19 06:41:18.415671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:26.633 [2024-11-19 06:41:18.415834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.415997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:26.633 [2024-11-19 06:41:18.416152] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:26.633 [2024-11-19 06:41:18.416161] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:17:26.633 [2024-11-19 06:41:18.416167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:26.633 [2024-11-19 06:41:18.416173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:26.633 [2024-11-19 06:41:18.416178] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:26.633 [2024-11-19 06:41:18.416185] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:26.633 [2024-11-19 06:41:18.416192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:26.633 [2024-11-19 06:41:18.416199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:26.633 [2024-11-19 06:41:18.416204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:26.633 [2024-11-19 06:41:18.416211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:26.633 [2024-11-19 06:41:18.416215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:26.633 [2024-11-19 06:41:18.416222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.634 [2024-11-19 06:41:18.416228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:26.634 [2024-11-19 06:41:18.416236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:17:26.634 [2024-11-19 06:41:18.416242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.426043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.634 [2024-11-19 06:41:18.426068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:26.634 [2024-11-19 06:41:18.426081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.767 ms 00:17:26.634 [2024-11-19 06:41:18.426087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.426392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.634 [2024-11-19 06:41:18.426400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:26.634 [2024-11-19 06:41:18.426408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:26.634 [2024-11-19 06:41:18.426414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.461602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.634 [2024-11-19 06:41:18.461710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.634 [2024-11-19 06:41:18.461726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.634 [2024-11-19 06:41:18.461732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.461810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.634 [2024-11-19 06:41:18.461818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.634 [2024-11-19 06:41:18.461825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.634 [2024-11-19 06:41:18.461831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.461887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.634 [2024-11-19 06:41:18.461894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.634 [2024-11-19 06:41:18.461905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.634 [2024-11-19 06:41:18.461911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.461957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.634 [2024-11-19 06:41:18.461964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.634 [2024-11-19 06:41:18.461972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.634 [2024-11-19 06:41:18.461977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.634 [2024-11-19 06:41:18.525600] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.634 [2024-11-19 06:41:18.525639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.634 [2024-11-19 06:41:18.525648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.634 [2024-11-19 06:41:18.525654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.893 [2024-11-19 06:41:18.574471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.893 [2024-11-19 06:41:18.574567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.893 [2024-11-19 06:41:18.574637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.893 [2024-11-19 06:41:18.574744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.893 [2024-11-19 06:41:18.574810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.893 [2024-11-19 06:41:18.574877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.893 [2024-11-19 06:41:18.574944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.893 [2024-11-19 06:41:18.574952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.893 [2024-11-19 06:41:18.574960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.893 [2024-11-19 06:41:18.574965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:26.893 [2024-11-19 06:41:18.575130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.388 ms, result 0 00:17:26.893 true 00:17:26.893 06:41:18 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73691 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73691 ']' 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73691 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73691 00:17:26.893 killing process with pid 73691 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73691' 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73691 00:17:26.893 06:41:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73691 00:17:32.173 06:41:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:33.559 65536+0 records in 00:17:33.559 65536+0 records out 00:17:33.559 268435456 bytes (268 MB, 256 MiB) copied, 1.10846 s, 242 MB/s 00:17:33.559 06:41:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:33.559 [2024-11-19 06:41:25.250119] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:33.559 [2024-11-19 06:41:25.250393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73876 ] 00:17:33.559 [2024-11-19 06:41:25.406451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.559 [2024-11-19 06:41:25.488330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.818 [2024-11-19 06:41:25.694696] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.818 [2024-11-19 06:41:25.694743] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.077 [2024-11-19 06:41:25.849179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.077 [2024-11-19 06:41:25.849321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.077 [2024-11-19 06:41:25.849337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:34.077 [2024-11-19 06:41:25.849343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.077 [2024-11-19 06:41:25.851420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.077 [2024-11-19 06:41:25.851457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.077 [2024-11-19 06:41:25.851465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:17:34.077 [2024-11-19 06:41:25.851470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.077 [2024-11-19 06:41:25.851531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.077 [2024-11-19 06:41:25.852258] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.077 [2024-11-19 06:41:25.852296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.852304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.078 [2024-11-19 06:41:25.852311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:34.078 [2024-11-19 06:41:25.852317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.853408] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.078 [2024-11-19 06:41:25.863204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.863328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.078 [2024-11-19 06:41:25.863341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.797 ms 00:17:34.078 [2024-11-19 06:41:25.863348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.863413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.863422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.078 [2024-11-19 06:41:25.863428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:34.078 [2024-11-19 06:41:25.863434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.867896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:34.078 [2024-11-19 06:41:25.867921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.078 [2024-11-19 06:41:25.867941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:17:34.078 [2024-11-19 06:41:25.867947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.868023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.868031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.078 [2024-11-19 06:41:25.868037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:34.078 [2024-11-19 06:41:25.868043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.868059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.868068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.078 [2024-11-19 06:41:25.868075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:34.078 [2024-11-19 06:41:25.868080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.868098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.078 [2024-11-19 06:41:25.870627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.870733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.078 [2024-11-19 06:41:25.870745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:17:34.078 [2024-11-19 06:41:25.870751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.870781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.870787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.078 [2024-11-19 06:41:25.870794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:34.078 [2024-11-19 06:41:25.870799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.870812] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.078 [2024-11-19 06:41:25.870830] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:34.078 [2024-11-19 06:41:25.870857] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.078 [2024-11-19 06:41:25.870868] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:34.078 [2024-11-19 06:41:25.870963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.078 [2024-11-19 06:41:25.870971] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.078 [2024-11-19 06:41:25.870980] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:34.078 [2024-11-19 06:41:25.870987] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.078 [2024-11-19 06:41:25.870996] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871002] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.078 [2024-11-19 06:41:25.871008] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.078 [2024-11-19 06:41:25.871014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.078 [2024-11-19 06:41:25.871019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.078 [2024-11-19 06:41:25.871025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.871030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.078 [2024-11-19 06:41:25.871037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:34.078 [2024-11-19 06:41:25.871042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.871109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.078 [2024-11-19 06:41:25.871115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.078 [2024-11-19 06:41:25.871123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:34.078 [2024-11-19 06:41:25.871128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.078 [2024-11-19 06:41:25.871203] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.078 [2024-11-19 06:41:25.871209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.078 [2024-11-19 06:41:25.871216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.078 [2024-11-19 06:41:25.871232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.078 [2024-11-19 06:41:25.871248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.078 [2024-11-19 06:41:25.871258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.078 [2024-11-19 06:41:25.871264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.078 [2024-11-19 06:41:25.871269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.078 [2024-11-19 06:41:25.871278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.078 [2024-11-19 06:41:25.871285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.078 [2024-11-19 06:41:25.871290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.078 [2024-11-19 06:41:25.871300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871305] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.078 [2024-11-19 06:41:25.871314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.078 [2024-11-19 06:41:25.871329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.078 [2024-11-19 06:41:25.871344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.078 [2024-11-19 06:41:25.871359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.078 [2024-11-19 06:41:25.871373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.078 [2024-11-19 06:41:25.871383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.078 [2024-11-19 06:41:25.871388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.078 [2024-11-19 06:41:25.871392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.078 [2024-11-19 06:41:25.871397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.078 [2024-11-19 06:41:25.871402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.078 [2024-11-19 06:41:25.871407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.078 [2024-11-19 06:41:25.871416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.078 [2024-11-19 06:41:25.871421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871426] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.078 [2024-11-19 06:41:25.871432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.078 [2024-11-19 06:41:25.871453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.078 [2024-11-19 06:41:25.871461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.078 [2024-11-19 06:41:25.871467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.078 [2024-11-19 06:41:25.871472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.078 [2024-11-19 06:41:25.871477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.079 
[2024-11-19 06:41:25.871483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.079 [2024-11-19 06:41:25.871488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.079 [2024-11-19 06:41:25.871493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.079 [2024-11-19 06:41:25.871499] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.079 [2024-11-19 06:41:25.871506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.079 [2024-11-19 06:41:25.871518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.079 [2024-11-19 06:41:25.871524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.079 [2024-11-19 06:41:25.871529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.079 [2024-11-19 06:41:25.871534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:34.079 [2024-11-19 06:41:25.871540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.079 [2024-11-19 06:41:25.871545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.079 [2024-11-19 06:41:25.871550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.079 [2024-11-19 06:41:25.871555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.079 [2024-11-19 06:41:25.871561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.079 [2024-11-19 06:41:25.871588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.079 [2024-11-19 06:41:25.871594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.079 [2024-11-19 06:41:25.871605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.079 [2024-11-19 06:41:25.871611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.079 [2024-11-19 06:41:25.871616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.079 [2024-11-19 06:41:25.871622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.871627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.079 [2024-11-19 06:41:25.871634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:17:34.079 [2024-11-19 06:41:25.871640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.892905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.893026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.079 [2024-11-19 06:41:25.893070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.225 ms 00:17:34.079 [2024-11-19 06:41:25.893088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.893203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.893230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.079 [2024-11-19 06:41:25.893245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:34.079 [2024-11-19 06:41:25.893259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.930490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.930603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.079 [2024-11-19 06:41:25.930649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.205 ms 00:17:34.079 [2024-11-19 06:41:25.930672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.930738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.930760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.079 [2024-11-19 06:41:25.930775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:34.079 [2024-11-19 06:41:25.930789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.931095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.931133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.079 [2024-11-19 06:41:25.931150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:34.079 [2024-11-19 06:41:25.931264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.931385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.931451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.079 [2024-11-19 06:41:25.931500] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:34.079 [2024-11-19 06:41:25.931516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.942296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.942382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.079 [2024-11-19 06:41:25.942419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.753 ms 00:17:34.079 [2024-11-19 06:41:25.942436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.952323] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:34.079 [2024-11-19 06:41:25.952428] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.079 [2024-11-19 06:41:25.952479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.952496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.079 [2024-11-19 06:41:25.952511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.946 ms 00:17:34.079 [2024-11-19 06:41:25.952524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.971172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.971265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.079 [2024-11-19 06:41:25.971312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:17:34.079 [2024-11-19 06:41:25.971329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.980440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.980527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:34.079 [2024-11-19 06:41:25.980563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.052 ms 00:17:34.079 [2024-11-19 06:41:25.980579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.989336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.989425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.079 [2024-11-19 06:41:25.989462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.711 ms 00:17:34.079 [2024-11-19 06:41:25.989478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.079 [2024-11-19 06:41:25.989959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.079 [2024-11-19 06:41:25.990033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.079 [2024-11-19 06:41:25.990071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:17:34.079 [2024-11-19 06:41:25.990087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.338 [2024-11-19 06:41:26.034569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.338 [2024-11-19 06:41:26.034689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.338 [2024-11-19 06:41:26.034729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.454 ms 00:17:34.338 [2024-11-19 06:41:26.034746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.338 [2024-11-19 06:41:26.042731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:34.338 [2024-11-19 06:41:26.054309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.338 [2024-11-19 06:41:26.054415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:34.338 [2024-11-19 06:41:26.054452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.499 ms 00:17:34.338 [2024-11-19 06:41:26.054470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.338 [2024-11-19 06:41:26.054554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.338 [2024-11-19 06:41:26.054578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:34.338 [2024-11-19 06:41:26.054594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:34.338 [2024-11-19 06:41:26.054608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.338 [2024-11-19 06:41:26.054656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.338 [2024-11-19 06:41:26.054673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:34.338 [2024-11-19 06:41:26.054688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:34.338 [2024-11-19 06:41:26.054743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.338 [2024-11-19 06:41:26.054778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.339 [2024-11-19 06:41:26.054822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:34.339 [2024-11-19 06:41:26.054842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:34.339 [2024-11-19 06:41:26.054895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.339 [2024-11-19 06:41:26.054956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:34.339 [2024-11-19 06:41:26.054977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.339 [2024-11-19 06:41:26.054991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:34.339 [2024-11-19 06:41:26.055007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:34.339 [2024-11-19 06:41:26.055117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.339 [2024-11-19 06:41:26.073146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.339 [2024-11-19 06:41:26.073243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:34.339 [2024-11-19 06:41:26.073282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.961 ms 00:17:34.339 [2024-11-19 06:41:26.073299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.339 [2024-11-19 06:41:26.073376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.339 [2024-11-19 06:41:26.073397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:34.339 [2024-11-19 06:41:26.073413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:34.339 [2024-11-19 06:41:26.073427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.339 
[2024-11-19 06:41:26.074434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.339 [2024-11-19 06:41:26.077001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.033 ms, result 0 00:17:34.339 [2024-11-19 06:41:26.077724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:34.339 [2024-11-19 06:41:26.088520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.273  [2024-11-19T06:41:28.138Z] Copying: 26/256 [MB] (26 MBps) [2024-11-19T06:41:29.526Z] Copying: 53/256 [MB] (27 MBps) [2024-11-19T06:41:30.100Z] Copying: 67/256 [MB] (13 MBps) [2024-11-19T06:41:31.487Z] Copying: 82/256 [MB] (14 MBps) [2024-11-19T06:41:32.434Z] Copying: 95/256 [MB] (13 MBps) [2024-11-19T06:41:33.381Z] Copying: 109/256 [MB] (14 MBps) [2024-11-19T06:41:34.325Z] Copying: 124/256 [MB] (14 MBps) [2024-11-19T06:41:35.265Z] Copying: 137/256 [MB] (12 MBps) [2024-11-19T06:41:36.201Z] Copying: 153/256 [MB] (16 MBps) [2024-11-19T06:41:37.143Z] Copying: 175/256 [MB] (21 MBps) [2024-11-19T06:41:38.517Z] Copying: 190/256 [MB] (14 MBps) [2024-11-19T06:41:39.451Z] Copying: 207/256 [MB] (17 MBps) [2024-11-19T06:41:40.385Z] Copying: 222/256 [MB] (14 MBps) [2024-11-19T06:41:40.953Z] Copying: 243/256 [MB] (21 MBps) [2024-11-19T06:41:40.953Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-19 06:41:40.913846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.024 [2024-11-19 06:41:40.921144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.024 [2024-11-19 06:41:40.921172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.024 [2024-11-19 06:41:40.921182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:49.024 [2024-11-19 06:41:40.921189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.024 [2024-11-19 06:41:40.921204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.024 [2024-11-19 06:41:40.923234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.024 [2024-11-19 06:41:40.923260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.024 [2024-11-19 06:41:40.923268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:17:49.024 [2024-11-19 06:41:40.923275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.024 [2024-11-19 06:41:40.925074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.024 [2024-11-19 06:41:40.925099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.024 [2024-11-19 06:41:40.925107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:17:49.024 [2024-11-19 06:41:40.925113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.024 [2024-11-19 06:41:40.930947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.024 [2024-11-19 06:41:40.930971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.024 [2024-11-19 06:41:40.930983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.821 ms 00:17:49.024 [2024-11-19 06:41:40.930989] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.024 [2024-11-19 06:41:40.936456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.024 [2024-11-19 06:41:40.936561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.024 [2024-11-19 06:41:40.936573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.442 ms 00:17:49.025 [2024-11-19 06:41:40.936579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.025 [2024-11-19 06:41:40.954689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.025 [2024-11-19 06:41:40.954715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.025 [2024-11-19 06:41:40.954723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.067 ms 00:17:49.025 [2024-11-19 06:41:40.954729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:40.966502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:40.966605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.286 [2024-11-19 06:41:40.966621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.746 ms 00:17:49.286 [2024-11-19 06:41:40.966629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:40.966719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:40.966726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.286 [2024-11-19 06:41:40.966732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:49.286 [2024-11-19 06:41:40.966738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:40.984658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:40.984681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.286 [2024-11-19 06:41:40.984689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.908 ms 00:17:49.286 [2024-11-19 06:41:40.984694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:41.002831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:41.002855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.286 [2024-11-19 06:41:41.002863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms 00:17:49.286 [2024-11-19 06:41:41.002868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:41.020371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:41.020459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.286 [2024-11-19 06:41:41.020471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.477 ms 00:17:49.286 [2024-11-19 06:41:41.020476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:41.038166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.286 [2024-11-19 06:41:41.038191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.286 [2024-11-19 06:41:41.038198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.648 ms 00:17:49.286 [2024-11-19 06:41:41.038203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.286 [2024-11-19 06:41:41.038228] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.286 [2024-11-19 06:41:41.038242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.286 [2024-11-19 06:41:41.038354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 
06:41:41.038371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:17:49.287 [2024-11-19 06:41:41.038512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.287 [2024-11-19 06:41:41.038788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.288 [2024-11-19 06:41:41.038794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.288 [2024-11-19 06:41:41.038800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.288 [2024-11-19 06:41:41.038806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.288 [2024-11-19 06:41:41.038817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.288 [2024-11-19 06:41:41.038823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:17:49.288 [2024-11-19 06:41:41.038829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.288 [2024-11-19 06:41:41.038834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.288 [2024-11-19 06:41:41.038839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.288 [2024-11-19 06:41:41.038845] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.288 [2024-11-19 06:41:41.038851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.288 [2024-11-19 06:41:41.038856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.288 [2024-11-19 06:41:41.038862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.288 [2024-11-19 06:41:41.038867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.288 [2024-11-19 06:41:41.038872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.288 [2024-11-19 06:41:41.038877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.288 [2024-11-19 06:41:41.038883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.288 [2024-11-19 06:41:41.038891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:17:49.288 [2024-11-19 06:41:41.038896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.048572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.288 [2024-11-19 06:41:41.048650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.288 [2024-11-19 06:41:41.048710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.663 ms 00:17:49.288 [2024-11-19 06:41:41.048729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.049023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.288 [2024-11-19 06:41:41.049093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.288 [2024-11-19 06:41:41.049154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:49.288 [2024-11-19 06:41:41.049170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.076582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.076669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.288 [2024-11-19 06:41:41.076725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.076743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.076814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.076835] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.288 [2024-11-19 06:41:41.076850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.076895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.076950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.076969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.288 [2024-11-19 06:41:41.076984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.076998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.077020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.077069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.288 [2024-11-19 06:41:41.077090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.077104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.136195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.136291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.288 [2024-11-19 06:41:41.136328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.136345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.184787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.184880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.288 [2024-11-19 06:41:41.184933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.184951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.288 [2024-11-19 06:41:41.185157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.288 [2024-11-19 06:41:41.185265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.288 [2024-11-19 06:41:41.185433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185507] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.288 [2024-11-19 06:41:41.185541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.288 [2024-11-19 06:41:41.185660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.288 [2024-11-19 06:41:41.185737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.288 [2024-11-19 06:41:41.185752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.288 [2024-11-19 06:41:41.185770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.288 [2024-11-19 06:41:41.185879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.728 ms, result 0 00:17:50.287 00:17:50.287 00:17:50.287 06:41:42 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74056 00:17:50.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.287 06:41:42 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:50.287 06:41:42 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74056 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74056 ']' 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.287 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:50.287 [2024-11-19 06:41:42.104510] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:50.287 [2024-11-19 06:41:42.104641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74056 ] 00:17:50.547 [2024-11-19 06:41:42.261988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.547 [2024-11-19 06:41:42.340027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.114 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.114 06:41:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:51.114 06:41:42 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:51.373 [2024-11-19 06:41:43.133059] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.373 [2024-11-19 06:41:43.133105] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.373 [2024-11-19 06:41:43.303686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.373 [2024-11-19 06:41:43.303817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.373 [2024-11-19 06:41:43.303836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.373 [2024-11-19 06:41:43.303843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 06:41:43.305912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 06:41:43.305948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.632 [2024-11-19 06:41:43.305958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:17:51.633 [2024-11-19 06:41:43.305963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.306021] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.633 [2024-11-19 06:41:43.306552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.633 [2024-11-19 06:41:43.306574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.306580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.633 [2024-11-19 06:41:43.306588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:17:51.633 [2024-11-19 06:41:43.306594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.307598] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.633 [2024-11-19 06:41:43.317149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.317262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.633 [2024-11-19 06:41:43.317275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:17:51.633 [2024-11-19 06:41:43.317282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.317339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.317349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.633 [2024-11-19 06:41:43.317356] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.633 [2024-11-19 06:41:43.317362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.321723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.321749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.633 [2024-11-19 06:41:43.321757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:17:51.633 [2024-11-19 06:41:43.321764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.321846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.321855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.633 [2024-11-19 06:41:43.321861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:51.633 [2024-11-19 06:41:43.321870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.321892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.321899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.633 [2024-11-19 06:41:43.321905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:51.633 [2024-11-19 06:41:43.321912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.321942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.633 [2024-11-19 06:41:43.324580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.324681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.633 [2024-11-19 06:41:43.324695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:17:51.633 [2024-11-19 06:41:43.324700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.324730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.324737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.633 [2024-11-19 06:41:43.324745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:51.633 [2024-11-19 06:41:43.324752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.324768] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.633 [2024-11-19 06:41:43.324782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:51.633 [2024-11-19 06:41:43.324814] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.633 [2024-11-19 06:41:43.324826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:51.633 [2024-11-19 06:41:43.324906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:51.633 [2024-11-19 06:41:43.324915] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.633 [2024-11-19 06:41:43.324942] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:51.633 [2024-11-19 06:41:43.324950] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.633 [2024-11-19 06:41:43.324959] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.633 [2024-11-19 06:41:43.324965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.633 [2024-11-19 06:41:43.324972] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.633 [2024-11-19 06:41:43.324978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:51.633 [2024-11-19 06:41:43.324985] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:51.633 [2024-11-19 06:41:43.324991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.324998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.633 [2024-11-19 06:41:43.325004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:17:51.633 [2024-11-19 06:41:43.325012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.325078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.633 [2024-11-19 06:41:43.325086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.633 [2024-11-19 06:41:43.325091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:51.633 [2024-11-19 06:41:43.325098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.633 [2024-11-19 06:41:43.325173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.633 [2024-11-19 06:41:43.325181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.633 [2024-11-19 06:41:43.325187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.633 [2024-11-19 06:41:43.325207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.633 [2024-11-19 06:41:43.325227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.633 [2024-11-19 06:41:43.325239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.633 [2024-11-19 06:41:43.325246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.633 [2024-11-19 06:41:43.325250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.633 [2024-11-19 06:41:43.325257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.633 [2024-11-19 06:41:43.325262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:51.633 [2024-11-19 06:41:43.325269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.633 
[2024-11-19 06:41:43.325274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.633 [2024-11-19 06:41:43.325281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.633 [2024-11-19 06:41:43.325301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.633 [2024-11-19 06:41:43.325319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.633 [2024-11-19 06:41:43.325336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.633 [2024-11-19 06:41:43.325353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.633 [2024-11-19 06:41:43.325364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.633 [2024-11-19 06:41:43.325368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:51.633 [2024-11-19 06:41:43.325375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.633 [2024-11-19 06:41:43.325380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.633 [2024-11-19 06:41:43.325387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:51.633 [2024-11-19 06:41:43.325391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.634 [2024-11-19 06:41:43.325399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:51.634 [2024-11-19 06:41:43.325404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:51.634 [2024-11-19 06:41:43.325411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.634 [2024-11-19 06:41:43.325416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:51.634 [2024-11-19 06:41:43.325422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:51.634 [2024-11-19 06:41:43.325427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.634 [2024-11-19 06:41:43.325433] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.634 [2024-11-19 06:41:43.325441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.634 [2024-11-19 06:41:43.325447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.634 [2024-11-19 06:41:43.325453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.634 [2024-11-19 06:41:43.325460] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:51.634 [2024-11-19 06:41:43.325465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.634 [2024-11-19 06:41:43.325472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.634 [2024-11-19 06:41:43.325477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.634 [2024-11-19 06:41:43.325483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.634 [2024-11-19 06:41:43.325488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.634 [2024-11-19 06:41:43.325495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.634 [2024-11-19 06:41:43.325501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.634 [2024-11-19 06:41:43.325516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:51.634 [2024-11-19 06:41:43.325524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.634 [2024-11-19 06:41:43.325529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:51.634 [2024-11-19 06:41:43.325536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:51.634 [2024-11-19 06:41:43.325541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:51.634 [2024-11-19 06:41:43.325547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:51.634 [2024-11-19 06:41:43.325552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:51.634 [2024-11-19 06:41:43.325559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:51.634 [2024-11-19 06:41:43.325564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:51.634 [2024-11-19 06:41:43.325595] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.634 [2024-11-19 
06:41:43.325601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.634 [2024-11-19 06:41:43.325615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.634 [2024-11-19 06:41:43.325622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.634 [2024-11-19 06:41:43.325627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.634 [2024-11-19 06:41:43.325634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.325640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.634 [2024-11-19 06:41:43.325646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:17:51.634 [2024-11-19 06:41:43.325653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.346508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.346600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.634 [2024-11-19 06:41:43.346646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.812 ms 00:17:51.634 [2024-11-19 06:41:43.346665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.346765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.346785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.634 [2024-11-19 06:41:43.346801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:51.634 [2024-11-19 06:41:43.346816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.370390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.370480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.634 [2024-11-19 06:41:43.370520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.547 ms 00:17:51.634 [2024-11-19 06:41:43.370537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.370590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.370608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.634 [2024-11-19 06:41:43.370624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:51.634 [2024-11-19 06:41:43.370639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.370920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.370960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.634 [2024-11-19 06:41:43.370979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:51.634 [2024-11-19 06:41:43.370993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.371101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.371170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.634 [2024-11-19 06:41:43.371187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:51.634 [2024-11-19 06:41:43.371201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.382742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.382825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.634 [2024-11-19 06:41:43.382864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.469 ms 00:17:51.634 [2024-11-19 06:41:43.382881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.392696] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:51.634 [2024-11-19 06:41:43.392798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.634 [2024-11-19 06:41:43.392846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.392863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.634 [2024-11-19 06:41:43.392879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.872 ms 00:17:51.634 [2024-11-19 06:41:43.392894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.411593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.411680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.634 [2024-11-19 06:41:43.411721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.612 ms 00:17:51.634 [2024-11-19 06:41:43.411738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.420952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.421031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.634 [2024-11-19 06:41:43.421071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.154 ms 00:17:51.634 [2024-11-19 06:41:43.421088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.430073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.430153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.634 [2024-11-19 06:41:43.430194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.939 ms 00:17:51.634 [2024-11-19 06:41:43.430211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.430673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.430744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.634 [2024-11-19 06:41:43.430788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:51.634 [2024-11-19 06:41:43.430805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 
06:41:43.487216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.487343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.634 [2024-11-19 06:41:43.487392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.381 ms 00:17:51.634 [2024-11-19 06:41:43.487411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.634 [2024-11-19 06:41:43.495286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.634 [2024-11-19 06:41:43.506622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.634 [2024-11-19 06:41:43.506724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.634 [2024-11-19 06:41:43.506764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.142 ms 00:17:51.634 [2024-11-19 06:41:43.506782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.506859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.506881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.635 [2024-11-19 06:41:43.506898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:51.635 [2024-11-19 06:41:43.506913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.506982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.507088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.635 [2024-11-19 06:41:43.507108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:51.635 [2024-11-19 06:41:43.507127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.507154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.507170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.635 [2024-11-19 06:41:43.507230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:51.635 [2024-11-19 06:41:43.507251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.507287] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.635 [2024-11-19 06:41:43.507307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.507324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.635 [2024-11-19 06:41:43.507340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:51.635 [2024-11-19 06:41:43.507353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.525500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.525591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.635 [2024-11-19 06:41:43.525635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.118 ms 00:17:51.635 [2024-11-19 06:41:43.525652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.525959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.635 [2024-11-19 06:41:43.526004] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.635 [2024-11-19 06:41:43.526070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:51.635 [2024-11-19 06:41:43.526089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.635 [2024-11-19 06:41:43.526743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.635 [2024-11-19 06:41:43.529144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.843 ms, result 0 00:17:51.635 [2024-11-19 06:41:43.530830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.635 Some configs were skipped because the RPC state that can call them passed over. 00:17:51.893 06:41:43 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:51.893 [2024-11-19 06:41:43.759269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.893 [2024-11-19 06:41:43.759364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:51.893 [2024-11-19 06:41:43.759421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:17:51.893 [2024-11-19 06:41:43.759455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.893 [2024-11-19 06:41:43.759560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.913 ms, result 0 00:17:51.893 true 00:17:51.893 06:41:43 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:52.153 [2024-11-19 06:41:43.958849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.153 [2024-11-19 06:41:43.958945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:52.153 [2024-11-19 06:41:43.958960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:17:52.153 [2024-11-19 06:41:43.958966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.153 [2024-11-19 06:41:43.958994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.145 ms, result 0 00:17:52.153 true 00:17:52.153 06:41:43 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74056 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74056 ']' 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74056 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74056 00:17:52.153 killing process with pid 74056 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74056' 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74056 00:17:52.153 06:41:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74056 00:17:52.721 [2024-11-19 06:41:44.533206] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.721 [2024-11-19 06:41:44.533254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.721 [2024-11-19 06:41:44.533264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.721 [2024-11-19 06:41:44.533272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.721 [2024-11-19 06:41:44.533290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.721 [2024-11-19 06:41:44.535357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.721 [2024-11-19 06:41:44.535380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.722 [2024-11-19 06:41:44.535391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:17:52.722 [2024-11-19 06:41:44.535397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.535638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.535647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.722 [2024-11-19 06:41:44.535654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:52.722 [2024-11-19 06:41:44.535660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.538763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.538786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.722 [2024-11-19 06:41:44.538796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:17:52.722 [2024-11-19 06:41:44.538802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.544041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.544144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.722 [2024-11-19 06:41:44.544160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:17:52.722 [2024-11-19 06:41:44.544166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.551677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.551768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.722 [2024-11-19 06:41:44.551783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.453 ms 00:17:52.722 [2024-11-19 06:41:44.551794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.558719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.558815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.722 [2024-11-19 06:41:44.558829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:17:52.722 [2024-11-19 06:41:44.558835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.558952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.558961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.722 [2024-11-19 06:41:44.558969] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:52.722 [2024-11-19 06:41:44.558975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.567184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.567208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.722 [2024-11-19 06:41:44.567217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.194 ms 00:17:52.722 [2024-11-19 06:41:44.567222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.574911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.574944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.722 [2024-11-19 06:41:44.574954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.660 ms 00:17:52.722 [2024-11-19 06:41:44.574960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.582738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.582762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.722 [2024-11-19 06:41:44.582772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.740 ms 00:17:52.722 [2024-11-19 06:41:44.582778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.589991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.722 [2024-11-19 06:41:44.590013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.722 [2024-11-19 06:41:44.590022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.165 ms 00:17:52.722 [2024-11-19 06:41:44.590027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.722 [2024-11-19 06:41:44.590053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.722 [2024-11-19 06:41:44.590064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590132] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 
[2024-11-19 06:41:44.590293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.722 [2024-11-19 06:41:44.590307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.723 [2024-11-19 06:41:44.590451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.723 [2024-11-19 06:41:44.590711] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.723 [2024-11-19 06:41:44.590720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:17:52.723 [2024-11-19 06:41:44.590731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.723 [2024-11-19 06:41:44.590738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.723 [2024-11-19 06:41:44.590743] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.723 [2024-11-19 06:41:44.590750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.723 [2024-11-19 06:41:44.590755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.723 [2024-11-19 06:41:44.590762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.723 [2024-11-19 06:41:44.590768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.723 [2024-11-19 06:41:44.590774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.723 [2024-11-19 06:41:44.590779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.724 [2024-11-19 06:41:44.590785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:52.724 [2024-11-19 06:41:44.590791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.724 [2024-11-19 06:41:44.590798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:17:52.724 [2024-11-19 06:41:44.590804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.600329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.724 [2024-11-19 06:41:44.600351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.724 [2024-11-19 06:41:44.600361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.509 ms 00:17:52.724 [2024-11-19 06:41:44.600367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.600652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.724 [2024-11-19 06:41:44.600659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.724 [2024-11-19 06:41:44.600668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:52.724 [2024-11-19 06:41:44.600674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.635347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.724 [2024-11-19 06:41:44.635371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.724 [2024-11-19 06:41:44.635381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.724 [2024-11-19 06:41:44.635387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.635467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.724 [2024-11-19 06:41:44.635475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.724 [2024-11-19 06:41:44.635484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.724 [2024-11-19 06:41:44.635490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.635525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.724 [2024-11-19 06:41:44.635532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.724 [2024-11-19 06:41:44.635541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.724 [2024-11-19 06:41:44.635547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.724 [2024-11-19 06:41:44.635561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.724 [2024-11-19 06:41:44.635567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.724 [2024-11-19 06:41:44.635574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.724 [2024-11-19 06:41:44.635581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.695282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.695314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.982 [2024-11-19 06:41:44.695323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.695329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 
06:41:44.744024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.982 [2024-11-19 06:41:44.744063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.982 [2024-11-19 06:41:44.744147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.982 [2024-11-19 06:41:44.744189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.982 [2024-11-19 06:41:44.744278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.982 [2024-11-19 06:41:44.744324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.982 [2024-11-19 06:41:44.744377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.982 [2024-11-19 06:41:44.744382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.982 [2024-11-19 06:41:44.744418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.982 [2024-11-19 06:41:44.744425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.983 [2024-11-19 06:41:44.744433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.983 [2024-11-19 06:41:44.744439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.983 [2024-11-19 06:41:44.744542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.320 ms, result 0 00:17:53.550 06:41:45 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:53.550 06:41:45 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.550 [2024-11-19 06:41:45.324372] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:53.550 [2024-11-19 06:41:45.324850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74103 ] 00:17:53.808 [2024-11-19 06:41:45.482111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.808 [2024-11-19 06:41:45.563014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.067 [2024-11-19 06:41:45.767305] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.067 [2024-11-19 06:41:45.767353] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.067 [2024-11-19 06:41:45.919207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.067 [2024-11-19 06:41:45.919242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.067 [2024-11-19 06:41:45.919252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.067 [2024-11-19 06:41:45.919258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.067 [2024-11-19 06:41:45.921335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.067 [2024-11-19 06:41:45.921455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.067 [2024-11-19 06:41:45.921468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:17:54.067 [2024-11-19 06:41:45.921475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.067 [2024-11-19 06:41:45.921526] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.067 [2024-11-19 06:41:45.922082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.067 [2024-11-19 06:41:45.922100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.067 [2024-11-19 06:41:45.922106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.068 [2024-11-19 06:41:45.922113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:17:54.068 [2024-11-19 06:41:45.922119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.923382] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.068 [2024-11-19 06:41:45.933010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.933038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.068 [2024-11-19 06:41:45.933047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.629 ms 00:17:54.068 [2024-11-19 06:41:45.933053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.933120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.933129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.068 [2024-11-19 06:41:45.933135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:17:54.068 [2024-11-19 06:41:45.933141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.937380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.937403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.068 [2024-11-19 06:41:45.937410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:17:54.068 [2024-11-19 06:41:45.937416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.937485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.937492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.068 [2024-11-19 06:41:45.937499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:54.068 [2024-11-19 06:41:45.937508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.937524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.937532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.068 [2024-11-19 06:41:45.937539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.068 [2024-11-19 06:41:45.937544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.937561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.068 [2024-11-19 06:41:45.940207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.940340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.068 [2024-11-19 06:41:45.940352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:17:54.068 [2024-11-19 06:41:45.940359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.940388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.940394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.068 [2024-11-19 06:41:45.940401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.068 [2024-11-19 06:41:45.940407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.940420] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.068 [2024-11-19 06:41:45.940437] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.068 [2024-11-19 06:41:45.940464] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.068 [2024-11-19 06:41:45.940475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:54.068 [2024-11-19 06:41:45.940553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.068 [2024-11-19 06:41:45.940562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.068 [2024-11-19 06:41:45.940570] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.068 [2024-11-19 06:41:45.940577] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940586] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.068 [2024-11-19 06:41:45.940598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.068 [2024-11-19 06:41:45.940604] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.068 [2024-11-19 06:41:45.940609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.068 [2024-11-19 06:41:45.940615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.940620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.068 [2024-11-19 06:41:45.940626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:54.068 [2024-11-19 06:41:45.940632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.940697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.068 [2024-11-19 06:41:45.940704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.068 [2024-11-19 06:41:45.940712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:54.068 [2024-11-19 06:41:45.940717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.068 [2024-11-19 06:41:45.940805] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.068 [2024-11-19 06:41:45.940813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.068 [2024-11-19 06:41:45.940819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.068 [2024-11-19 06:41:45.940836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.068 [2024-11-19 06:41:45.940852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.068 [2024-11-19 06:41:45.940862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.068 [2024-11-19 06:41:45.940867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.068 [2024-11-19 06:41:45.940872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.068 [2024-11-19 06:41:45.940881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.068 [2024-11-19 06:41:45.940886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.068 [2024-11-19 06:41:45.940891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940896] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.068 [2024-11-19 06:41:45.940902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.068 [2024-11-19 06:41:45.940917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.068 [2024-11-19 06:41:45.940946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.068 [2024-11-19 06:41:45.940961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.068 [2024-11-19 06:41:45.940976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.068 [2024-11-19 06:41:45.940986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.068 [2024-11-19 06:41:45.940991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.068 [2024-11-19 06:41:45.940996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.068 [2024-11-19 06:41:45.941001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.068 [2024-11-19 06:41:45.941007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.068 [2024-11-19 06:41:45.941012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.068 [2024-11-19 06:41:45.941017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.068 [2024-11-19 06:41:45.941022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.069 [2024-11-19 06:41:45.941027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.069 [2024-11-19 06:41:45.941032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.069 [2024-11-19 06:41:45.941036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.069 [2024-11-19 06:41:45.941041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.069 [2024-11-19 06:41:45.941047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.069 [2024-11-19 06:41:45.941054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.069 [2024-11-19 06:41:45.941059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.069 [2024-11-19 06:41:45.941066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.069 [2024-11-19 06:41:45.941072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.069 
[2024-11-19 06:41:45.941077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.069 [2024-11-19 06:41:45.941082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.069 [2024-11-19 06:41:45.941088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.069 [2024-11-19 06:41:45.941093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.069 [2024-11-19 06:41:45.941098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.069 [2024-11-19 06:41:45.941104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.069 [2024-11-19 06:41:45.941112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.069 [2024-11-19 06:41:45.941123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.069 [2024-11-19 06:41:45.941128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.069 [2024-11-19 06:41:45.941134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.069 [2024-11-19 06:41:45.941139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.069 [2024-11-19 06:41:45.941145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.069 [2024-11-19 06:41:45.941150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.069 [2024-11-19 06:41:45.941155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.069 [2024-11-19 06:41:45.941160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.069 [2024-11-19 06:41:45.941165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.069 [2024-11-19 06:41:45.941191] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.069 [2024-11-19 06:41:45.941197] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.069 [2024-11-19 06:41:45.941209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.069 [2024-11-19 06:41:45.941214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.069 [2024-11-19 06:41:45.941220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.069 [2024-11-19 06:41:45.941225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.069 [2024-11-19 06:41:45.941231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.069 [2024-11-19 06:41:45.941239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:17:54.069 [2024-11-19 06:41:45.941250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.069 [2024-11-19 06:41:45.961880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.069 [2024-11-19 06:41:45.961908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.069 [2024-11-19 06:41:45.961917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.593 ms 00:17:54.069 [2024-11-19 06:41:45.961933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.069 [2024-11-19 06:41:45.962026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.069 [2024-11-19 06:41:45.962037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.069 [2024-11-19 06:41:45.962044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:54.069 [2024-11-19 06:41:45.962049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.015828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.015859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.329 [2024-11-19 06:41:46.015868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.762 ms 00:17:54.329 [2024-11-19 06:41:46.015877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.015960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.015970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.329 [2024-11-19 06:41:46.015977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:54.329 [2024-11-19 06:41:46.015982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.016261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.016273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.329 [2024-11-19 06:41:46.016280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:54.329 [2024-11-19 06:41:46.016291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 
06:41:46.016392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.016400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.329 [2024-11-19 06:41:46.016406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:54.329 [2024-11-19 06:41:46.016412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.027173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.027197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.329 [2024-11-19 06:41:46.027205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.746 ms 00:17:54.329 [2024-11-19 06:41:46.027211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.037415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:54.329 [2024-11-19 06:41:46.037441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.329 [2024-11-19 06:41:46.037451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.037457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.329 [2024-11-19 06:41:46.037464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.165 ms 00:17:54.329 [2024-11-19 06:41:46.037470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.056122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.056154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.329 [2024-11-19 06:41:46.056163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.597 ms 00:17:54.329 [2024-11-19 06:41:46.056169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.065154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.065255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.329 [2024-11-19 06:41:46.065266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.933 ms 00:17:54.329 [2024-11-19 06:41:46.065272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.073889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.073912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.329 [2024-11-19 06:41:46.073920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:17:54.329 [2024-11-19 06:41:46.073931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.074379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.074399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.329 [2024-11-19 06:41:46.074407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:17:54.329 [2024-11-19 06:41:46.074412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.118660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.118690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.329 [2024-11-19 06:41:46.118700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.231 ms 00:17:54.329 [2024-11-19 06:41:46.118706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.126391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.329 [2024-11-19 06:41:46.137641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.137667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.329 [2024-11-19 06:41:46.137677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.869 ms 00:17:54.329 [2024-11-19 06:41:46.137683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.137740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.137748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.329 [2024-11-19 06:41:46.137755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.329 [2024-11-19 06:41:46.137760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.137796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.137803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.329 [2024-11-19 06:41:46.137809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:54.329 [2024-11-19 06:41:46.137815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.137836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.137845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.329 [2024-11-19 06:41:46.137852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.329 [2024-11-19 06:41:46.137857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.137881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.329 [2024-11-19 06:41:46.137888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.137894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.329 [2024-11-19 06:41:46.137900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:54.329 [2024-11-19 06:41:46.137905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.156041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.156150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.329 [2024-11-19 06:41:46.156163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.122 ms 00:17:54.329 [2024-11-19 06:41:46.156169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.156238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.329 [2024-11-19 06:41:46.156247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:54.329 [2024-11-19 06:41:46.156253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:54.329 [2024-11-19 06:41:46.156259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.329 [2024-11-19 06:41:46.156864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.329 [2024-11-19 06:41:46.159153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.439 ms, result 0 00:17:54.329 [2024-11-19 06:41:46.160091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.329 [2024-11-19 06:41:46.174720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.274  [2024-11-19T06:41:48.589Z] Copying: 20/256 [MB] (20 MBps) [2024-11-19T06:41:49.530Z] Copying: 37/256 [MB] (16 MBps) [2024-11-19T06:41:50.475Z] Copying: 53/256 [MB] (16 MBps) [2024-11-19T06:41:51.421Z] Copying: 69/256 [MB] (16 MBps) [2024-11-19T06:41:52.367Z] Copying: 80/256 [MB] (11 MBps) [2024-11-19T06:41:53.313Z] Copying: 95/256 [MB] (14 MBps) [2024-11-19T06:41:54.259Z] Copying: 106/256 [MB] (10 MBps) [2024-11-19T06:41:55.204Z] Copying: 119/256 [MB] (13 MBps) [2024-11-19T06:41:56.594Z] Copying: 137/256 [MB] (17 MBps) [2024-11-19T06:41:57.539Z] Copying: 152/256 [MB] (15 MBps) [2024-11-19T06:41:58.484Z] Copying: 164/256 [MB] (11 MBps) [2024-11-19T06:41:59.439Z] Copying: 179/256 [MB] (15 MBps) [2024-11-19T06:42:00.376Z] Copying: 196/256 [MB] (17 MBps) [2024-11-19T06:42:01.322Z] Copying: 231/256 [MB] (34 MBps) [2024-11-19T06:42:02.267Z] Copying: 246/256 [MB] (15 MBps) [2024-11-19T06:42:02.267Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-19 06:42:02.048051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.338 [2024-11-19 06:42:02.058560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.058750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.338 [2024-11-19 06:42:02.058964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.338 [2024-11-19 06:42:02.059073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.059130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:10.338 [2024-11-19 06:42:02.062133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.062289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.338 [2024-11-19 06:42:02.062362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:18:10.338 [2024-11-19 06:42:02.062385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.062671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.062698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.338 [2024-11-19 06:42:02.062719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:18:10.338 [2024-11-19 06:42:02.063204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.066980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.067118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.338 [2024-11-19 06:42:02.067184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:18:10.338 [2024-11-19 06:42:02.067207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.074114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.074257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.338 [2024-11-19 06:42:02.074314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.866 ms 00:18:10.338 [2024-11-19 06:42:02.074336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.100417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.100593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.338 [2024-11-19 06:42:02.100650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.997 ms 00:18:10.338 [2024-11-19 06:42:02.100671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.116776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.116979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.338 [2024-11-19 06:42:02.117044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.041 ms 00:18:10.338 [2024-11-19 06:42:02.117084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.117253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.117281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.338 [2024-11-19 06:42:02.117302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:10.338 [2024-11-19 06:42:02.117321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.143224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.143395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.338 [2024-11-19 06:42:02.143475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.863 ms 00:18:10.338 [2024-11-19 06:42:02.143498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.168846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.169042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.338 [2024-11-19 06:42:02.169102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms 00:18:10.338 [2024-11-19 06:42:02.169134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 [2024-11-19 06:42:02.193959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.338 [2024-11-19 06:42:02.194123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.338 [2024-11-19 06:42:02.194179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.750 ms 00:18:10.338 [2024-11-19 06:42:02.194200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.338 
[2024-11-19 06:42:02.218990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.339 [2024-11-19 06:42:02.219155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.339 [2024-11-19 06:42:02.219214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.710 ms 00:18:10.339 [2024-11-19 06:42:02.219224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.339 [2024-11-19 06:42:02.219796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.339 [2024-11-19 06:42:02.219854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.219999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.339 [2024-11-19 06:42:02.220044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.339 [2024-11-19 06:42:02.220535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220627] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.340 [2024-11-19 06:42:02.220675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.340 [2024-11-19 06:42:02.220685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:18:10.340 [2024-11-19 06:42:02.220694] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.340 [2024-11-19 06:42:02.220703] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.340 [2024-11-19 06:42:02.220710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.340 [2024-11-19 06:42:02.220720] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.340 [2024-11-19 06:42:02.220728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.340 [2024-11-19 06:42:02.220736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.340 [2024-11-19 06:42:02.220743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.340 [2024-11-19 06:42:02.220750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.340 [2024-11-19 06:42:02.220756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.340 [2024-11-19 06:42:02.220765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.340 [2024-11-19 06:42:02.220777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.340 [2024-11-19 06:42:02.220787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:18:10.340 [2024-11-19 06:42:02.220795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.340 [2024-11-19 06:42:02.234536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.340 [2024-11-19 06:42:02.234580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.340 [2024-11-19 06:42:02.234592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.697 ms 00:18:10.340 [2024-11-19 06:42:02.234600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.340 [2024-11-19 06:42:02.235040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.340 [2024-11-19 06:42:02.235066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.340 [2024-11-19 06:42:02.235077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:18:10.340 [2024-11-19 06:42:02.235085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.274335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.274390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.601 [2024-11-19 06:42:02.274403] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.274412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.274507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.274518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.601 [2024-11-19 06:42:02.274528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.274536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.274589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.274600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.601 [2024-11-19 06:42:02.274610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.274619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.274638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.274651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.601 [2024-11-19 06:42:02.274659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.274669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.358726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.358784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.601 [2024-11-19 06:42:02.358797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.358805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.601 [2024-11-19 06:42:02.428484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.601 [2024-11-19 06:42:02.428571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.601 [2024-11-19 06:42:02.428633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:18:10.601 [2024-11-19 06:42:02.428760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.601 [2024-11-19 06:42:02.428821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.601 [2024-11-19 06:42:02.428895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.601 [2024-11-19 06:42:02.428903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.601 [2024-11-19 06:42:02.428979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.601 [2024-11-19 06:42:02.428991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.601 [2024-11-19 06:42:02.429004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.602 [2024-11-19 06:42:02.429012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.602 [2024-11-19 06:42:02.429185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.596 ms, result 0 00:18:11.543 00:18:11.543 00:18:11.543 06:42:03 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:11.543 06:42:03 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:12.113 06:42:03 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:12.113 [2024-11-19 06:42:03.838392] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:12.113 [2024-11-19 06:42:03.838539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74296 ] 00:18:12.113 [2024-11-19 06:42:04.002394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.373 [2024-11-19 06:42:04.124054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.634 [2024-11-19 06:42:04.412887] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.634 [2024-11-19 06:42:04.412991] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.896 [2024-11-19 06:42:04.575389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.575646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.896 [2024-11-19 06:42:04.575673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.896 [2024-11-19 06:42:04.575683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.578656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.578863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.896 [2024-11-19 06:42:04.578885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:18:12.896 [2024-11-19 06:42:04.578894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.579153] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.896 [2024-11-19 06:42:04.580009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.896 [2024-11-19 06:42:04.580047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.580057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.896 [2024-11-19 06:42:04.580068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:18:12.896 [2024-11-19 06:42:04.580078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.581805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.896 [2024-11-19 06:42:04.596530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.596589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.896 [2024-11-19 06:42:04.596603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.727 ms 00:18:12.896 [2024-11-19 06:42:04.596611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.596732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.596745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.896 [2024-11-19 06:42:04.596755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:12.896 [2024-11-19 06:42:04.596763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.604891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:12.896 [2024-11-19 06:42:04.604953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.896 [2024-11-19 06:42:04.604965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.082 ms 00:18:12.896 [2024-11-19 06:42:04.604973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.605084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.605094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.896 [2024-11-19 06:42:04.605104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:12.896 [2024-11-19 06:42:04.605112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.605138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.605149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.896 [2024-11-19 06:42:04.605158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:12.896 [2024-11-19 06:42:04.605165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-19 06:42:04.605187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.896 [2024-11-19 06:42:04.609390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-19 06:42:04.609580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.896 [2024-11-19 06:42:04.609601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:18:12.897 [2024-11-19 06:42:04.609610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.897 [2024-11-19 06:42:04.609692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.897 [2024-11-19 06:42:04.609703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.897 [2024-11-19 06:42:04.609713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:12.897 [2024-11-19 06:42:04.609720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.897 [2024-11-19 06:42:04.609742] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.897 [2024-11-19 06:42:04.609771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.897 [2024-11-19 06:42:04.609807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.897 [2024-11-19 06:42:04.609824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:12.897 [2024-11-19 06:42:04.609950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.897 [2024-11-19 06:42:04.609963] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.897 [2024-11-19 06:42:04.609974] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:12.897 [2024-11-19 06:42:04.609986] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.897 [2024-11-19 06:42:04.609999] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.897 [2024-11-19 06:42:04.610015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.897 [2024-11-19 06:42:04.610022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.897 [2024-11-19 06:42:04.610031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.897 [2024-11-19 06:42:04.610039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.897 [2024-11-19 06:42:04.610047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.897 [2024-11-19 06:42:04.610055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:12.897 [2024-11-19 06:42:04.610063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.897 [2024-11-19 06:42:04.610151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.897 [2024-11-19 06:42:04.610161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.897 [2024-11-19 06:42:04.610173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:12.897 [2024-11-19 06:42:04.610180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.897 [2024-11-19 06:42:04.610285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.897 [2024-11-19 06:42:04.610296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.897 [2024-11-19 06:42:04.610305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.897 [2024-11-19 06:42:04.610331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.897 [2024-11-19 06:42:04.610354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.897 [2024-11-19 06:42:04.610367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.897 [2024-11-19 06:42:04.610374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.897 [2024-11-19 06:42:04.610382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.897 [2024-11-19 06:42:04.610398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.897 [2024-11-19 06:42:04.610405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:12.897 [2024-11-19 06:42:04.610414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.897 [2024-11-19 06:42:04.610429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610436] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.897 [2024-11-19 06:42:04.610450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.897 [2024-11-19 06:42:04.610470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.897 [2024-11-19 06:42:04.610490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.897 [2024-11-19 06:42:04.610510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.897 [2024-11-19 06:42:04.610532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.897 [2024-11-19 06:42:04.610545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.897 [2024-11-19 06:42:04.610552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:12.897 [2024-11-19 06:42:04.610559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.897 [2024-11-19 06:42:04.610566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.897 [2024-11-19 06:42:04.610574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:12.897 [2024-11-19 06:42:04.610580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.897 [2024-11-19 06:42:04.610595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:12.897 [2024-11-19 06:42:04.610602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610609] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.897 [2024-11-19 06:42:04.610618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.897 [2024-11-19 06:42:04.610626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.897 [2024-11-19 06:42:04.610646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.897 [2024-11-19 06:42:04.610654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.897 [2024-11-19 06:42:04.610660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.897 
[2024-11-19 06:42:04.610668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.897 [2024-11-19 06:42:04.610676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.897 [2024-11-19 06:42:04.610683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.897 [2024-11-19 06:42:04.610692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.897 [2024-11-19 06:42:04.610701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.897 [2024-11-19 06:42:04.610710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.897 [2024-11-19 06:42:04.610717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:12.897 [2024-11-19 06:42:04.610724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.897 [2024-11-19 06:42:04.610731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:12.897 [2024-11-19 06:42:04.610739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:12.897 [2024-11-19 06:42:04.610746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:12.897 [2024-11-19 06:42:04.610753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:12.897 [2024-11-19 06:42:04.610760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:12.897 [2024-11-19 06:42:04.610767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:12.898 [2024-11-19 06:42:04.610775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:12.898 [2024-11-19 06:42:04.610809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.898 [2024-11-19 06:42:04.610817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.898 [2024-11-19 06:42:04.610833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.898 [2024-11-19 06:42:04.610840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.898 [2024-11-19 06:42:04.610847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.898 [2024-11-19 06:42:04.610854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.610862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.898 [2024-11-19 06:42:04.610872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:18:12.898 [2024-11-19 06:42:04.610880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.642818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.642872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.898 [2024-11-19 06:42:04.642884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.868 ms 00:18:12.898 [2024-11-19 06:42:04.642892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.643051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.643069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.898 [2024-11-19 06:42:04.643078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:12.898 [2024-11-19 06:42:04.643086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.687585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.687644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.898 [2024-11-19 06:42:04.687657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.475 ms 00:18:12.898 [2024-11-19 06:42:04.687670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.687782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.687794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.898 [2024-11-19 06:42:04.687804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.898 [2024-11-19 06:42:04.687812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.688387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.688424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.898 [2024-11-19 06:42:04.688435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:18:12.898 [2024-11-19 06:42:04.688453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.688607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.688627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.898 [2024-11-19 06:42:04.688636] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:12.898 [2024-11-19 06:42:04.688645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.704967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.705014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.898 [2024-11-19 06:42:04.705026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.299 ms 00:18:12.898 [2024-11-19 06:42:04.705034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.719470] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:12.898 [2024-11-19 06:42:04.719668] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.898 [2024-11-19 06:42:04.719689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.719698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.898 [2024-11-19 06:42:04.719708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.543 ms 00:18:12.898 [2024-11-19 06:42:04.719715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.746174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.746384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.898 [2024-11-19 06:42:04.746408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.040 ms 00:18:12.898 [2024-11-19 06:42:04.746417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.759326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.759374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.898 [2024-11-19 06:42:04.759388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:18:12.898 [2024-11-19 06:42:04.759396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.771853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.771901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.898 [2024-11-19 06:42:04.771913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.354 ms 00:18:12.898 [2024-11-19 06:42:04.771921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.898 [2024-11-19 06:42:04.772600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.898 [2024-11-19 06:42:04.772636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.898 [2024-11-19 06:42:04.772648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:12.898 [2024-11-19 06:42:04.772656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.837584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.837659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.160 [2024-11-19 06:42:04.837675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.899 ms 00:18:13.160 [2024-11-19 06:42:04.837684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.849117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:13.160 [2024-11-19 06:42:04.868293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.868525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.160 [2024-11-19 06:42:04.868547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.501 ms 00:18:13.160 [2024-11-19 06:42:04.868557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.868665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.868678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.160 [2024-11-19 06:42:04.868689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:13.160 [2024-11-19 06:42:04.868697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.868759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.868769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.160 [2024-11-19 06:42:04.868778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:13.160 [2024-11-19 06:42:04.868786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.868815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.868826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.160 [2024-11-19 06:42:04.868835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.160 [2024-11-19 06:42:04.868843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.868880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.160 [2024-11-19 06:42:04.868892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.868900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.160 [2024-11-19 06:42:04.868909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:13.160 [2024-11-19 06:42:04.868917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.895439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.895661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.160 [2024-11-19 06:42:04.895685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.471 ms 00:18:13.160 [2024-11-19 06:42:04.895694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.160 [2024-11-19 06:42:04.896237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.160 [2024-11-19 06:42:04.896292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.160 [2024-11-19 06:42:04.896306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:13.160 [2024-11-19 06:42:04.896316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:13.160 [2024-11-19 06:42:04.897616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.160 [2024-11-19 06:42:04.901209] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.900 ms, result 0 00:18:13.160 [2024-11-19 06:42:04.902531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.160 [2024-11-19 06:42:04.916363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.421  [2024-11-19T06:42:05.350Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-19 06:42:05.179489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.421 [2024-11-19 06:42:05.188791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.188992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.421 [2024-11-19 06:42:05.189068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.421 [2024-11-19 06:42:05.189102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.189146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:13.421 [2024-11-19 06:42:05.192060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.192207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.421 [2024-11-19 06:42:05.192273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:18:13.421 [2024-11-19 06:42:05.192298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.195557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.195702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.421 [2024-11-19 06:42:05.195762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:18:13.421 [2024-11-19 06:42:05.195785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.200185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.200328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.421 [2024-11-19 06:42:05.200386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:18:13.421 [2024-11-19 06:42:05.200408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.207364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.207523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.421 [2024-11-19 06:42:05.207749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.909 ms 00:18:13.421 [2024-11-19 06:42:05.207772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.232871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.233050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.421 [2024-11-19 06:42:05.233115] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.036 ms 00:18:13.421 [2024-11-19 06:42:05.233137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.421 [2024-11-19 06:42:05.249242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.421 [2024-11-19 06:42:05.249409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.421 [2024-11-19 06:42:05.249476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.966 ms 00:18:13.421 [2024-11-19 06:42:05.249498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.249648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.422 [2024-11-19 06:42:05.249675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.422 [2024-11-19 06:42:05.249694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:13.422 [2024-11-19 06:42:05.249714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.275704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.422 [2024-11-19 06:42:05.275861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:13.422 [2024-11-19 06:42:05.275917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.890 ms 00:18:13.422 [2024-11-19 06:42:05.275962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.300874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.422 [2024-11-19 06:42:05.301052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:13.422 [2024-11-19 06:42:05.301121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.824 ms 00:18:13.422 [2024-11-19 06:42:05.301146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.325916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.422 [2024-11-19 06:42:05.326085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.422 [2024-11-19 06:42:05.326142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.665 ms 00:18:13.422 [2024-11-19 06:42:05.326163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.351155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.422 [2024-11-19 06:42:05.351325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.422 [2024-11-19 06:42:05.351383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.860 ms 00:18:13.422 [2024-11-19 06:42:05.351404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.422 [2024-11-19 06:42:05.351634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.422 [2024-11-19 06:42:05.351669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.422 [2024-11-19 06:42:05.351850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.351952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.352019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.352067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.352097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.352135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.422 [2024-11-19 06:42:05.352183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-19 06:42:05.352384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352762] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.352915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.353149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.353195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-19 06:42:05.353233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.685 [2024-11-19 06:42:05.353255] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:18:13.685 [2024-11-19 06:42:05.353284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.685 [2024-11-19 06:42:05.353303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:13.685 [2024-11-19 06:42:05.353321] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.685 [2024-11-19 06:42:05.353341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.685 [2024-11-19 06:42:05.353359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.685 [2024-11-19 06:42:05.353378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.685 [2024-11-19 06:42:05.353397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.685 [2024-11-19 06:42:05.353416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.685 [2024-11-19 06:42:05.353433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.685 [2024-11-19 06:42:05.353453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-19 06:42:05.353479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.685 [2024-11-19 06:42:05.353499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:18:13.685 [2024-11-19 06:42:05.353518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-19 06:42:05.366920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-19 06:42:05.367106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.685 [2024-11-19 06:42:05.367162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.363 ms 00:18:13.685 [2024-11-19 06:42:05.367185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-19 06:42:05.367623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-19 06:42:05.367672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.685 [2024-11-19 06:42:05.367742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:18:13.686 [2024-11-19 06:42:05.367765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.406352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.406516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.686 [2024-11-19 06:42:05.406535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.406544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.406650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.406661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.686 [2024-11-19 06:42:05.406670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.406678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.406728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.406739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.686 [2024-11-19 06:42:05.406747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.406756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.406773] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.406784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.686 [2024-11-19 06:42:05.406792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.406799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.490591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.490642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.686 [2024-11-19 06:42:05.490656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.490664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.560475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.560671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.686 [2024-11-19 06:42:05.560690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.560699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.560780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.560791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.686 [2024-11-19 06:42:05.560800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.560808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.560841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.560851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.686 [2024-11-19 06:42:05.560867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.560876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.561008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.561020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.686 [2024-11-19 06:42:05.561030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.561039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.561073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.561083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.686 [2024-11-19 06:42:05.561091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.561103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.561146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.561156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.686 [2024-11-19 06:42:05.561165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.561173] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.561222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.686 [2024-11-19 06:42:05.561233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.686 [2024-11-19 06:42:05.561245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-19 06:42:05.561254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.686 [2024-11-19 06:42:05.561410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.607 ms, result 0 00:18:14.629 00:18:14.629 00:18:14.629 06:42:06 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74327 00:18:14.629 06:42:06 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74327 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74327 ']' 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.629 06:42:06 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.629 06:42:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:14.629 [2024-11-19 06:42:06.411578] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:14.629 [2024-11-19 06:42:06.412264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74327 ] 00:18:14.891 [2024-11-19 06:42:06.576726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.891 [2024-11-19 06:42:06.695234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.462 06:42:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.462 06:42:07 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:15.462 06:42:07 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:15.762 [2024-11-19 06:42:07.603296] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.762 [2024-11-19 06:42:07.603373] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.028 [2024-11-19 06:42:07.782786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.782847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.028 [2024-11-19 06:42:07.782869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.028 [2024-11-19 06:42:07.782878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.785881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.786095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.028 [2024-11-19 06:42:07.786121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:18:16.028 [2024-11-19 06:42:07.786130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.786332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.028 [2024-11-19 06:42:07.787194] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.028 [2024-11-19 06:42:07.787243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.787252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.028 [2024-11-19 06:42:07.787264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:18:16.028 [2024-11-19 06:42:07.787272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.789035] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:16.028 [2024-11-19 06:42:07.803367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.803422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:16.028 [2024-11-19 06:42:07.803436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.341 ms 00:18:16.028 [2024-11-19 06:42:07.803461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.803576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.803591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:16.028 [2024-11-19 06:42:07.803600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:16.028 [2024-11-19 06:42:07.803610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.811625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.811829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.028 [2024-11-19 06:42:07.811847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.961 ms 00:18:16.028 [2024-11-19 06:42:07.811857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.812003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.812018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.028 [2024-11-19 06:42:07.812027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:16.028 [2024-11-19 06:42:07.812036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.028 [2024-11-19 06:42:07.812087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.028 [2024-11-19 06:42:07.812098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.028 [2024-11-19 06:42:07.812106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:16.028 [2024-11-19 06:42:07.812116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.029 [2024-11-19 06:42:07.812141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:16.029 [2024-11-19 06:42:07.816270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.029 [2024-11-19 06:42:07.816308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.029 [2024-11-19 06:42:07.816321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.133 ms 00:18:16.029 [2024-11-19 06:42:07.816328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.029 [2024-11-19 06:42:07.816404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.029 [2024-11-19 06:42:07.816414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.029 [2024-11-19 06:42:07.816426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:16.029 [2024-11-19 06:42:07.816436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.029 [2024-11-19 06:42:07.816461] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:16.029 [2024-11-19 06:42:07.816482] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:16.029 [2024-11-19 06:42:07.816527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:16.029 [2024-11-19 06:42:07.816543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:16.029 [2024-11-19 06:42:07.816651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.029 [2024-11-19 06:42:07.816661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.029 [2024-11-19 06:42:07.816676] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:16.029 [2024-11-19 06:42:07.816689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.029 [2024-11-19 06:42:07.816701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.029 [2024-11-19 06:42:07.816710] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:16.029 [2024-11-19 06:42:07.816719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.029 [2024-11-19 06:42:07.816727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.029 [2024-11-19 06:42:07.816739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:16.029 [2024-11-19 06:42:07.816747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.029 [2024-11-19 06:42:07.816757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.029 [2024-11-19 06:42:07.816765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:16.029 [2024-11-19 06:42:07.816774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.029 [2024-11-19 06:42:07.816863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.029 [2024-11-19 06:42:07.816874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.029 [2024-11-19 06:42:07.816881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:16.029 [2024-11-19 06:42:07.816891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.029 [2024-11-19 06:42:07.817016] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.029 [2024-11-19 06:42:07.817030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.029 [2024-11-19 06:42:07.817038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.029 [2024-11-19 06:42:07.817066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.029 [2024-11-19 06:42:07.817099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.029 [2024-11-19 06:42:07.817115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.029 [2024-11-19 06:42:07.817123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:16.029 [2024-11-19 06:42:07.817130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.029 [2024-11-19 06:42:07.817139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.029 [2024-11-19 06:42:07.817145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:16.029 [2024-11-19 06:42:07.817154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 
[2024-11-19 06:42:07.817160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:16.029 [2024-11-19 06:42:07.817170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.029 [2024-11-19 06:42:07.817201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.029 [2024-11-19 06:42:07.817227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.029 [2024-11-19 06:42:07.817249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.029 [2024-11-19 06:42:07.817273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.029 [2024-11-19 06:42:07.817324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.029 [2024-11-19 06:42:07.817341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.029 [2024-11-19 06:42:07.817349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:16.029 [2024-11-19 06:42:07.817356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.029 [2024-11-19 06:42:07.817365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.029 [2024-11-19 06:42:07.817371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:16.029 [2024-11-19 06:42:07.817381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.029 [2024-11-19 06:42:07.817397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:16.029 [2024-11-19 06:42:07.817403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817411] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.029 [2024-11-19 06:42:07.817420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.029 [2024-11-19 06:42:07.817431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.029 [2024-11-19 06:42:07.817448] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:16.029 [2024-11-19 06:42:07.817455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.029 [2024-11-19 06:42:07.817466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.029 [2024-11-19 06:42:07.817474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.029 [2024-11-19 06:42:07.817482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.029 [2024-11-19 06:42:07.817489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.029 [2024-11-19 06:42:07.817500] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.029 [2024-11-19 06:42:07.817509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.029 [2024-11-19 06:42:07.817522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:16.029 [2024-11-19 06:42:07.817529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:16.029 [2024-11-19 06:42:07.817539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:16.029 [2024-11-19 06:42:07.817547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:16.029 [2024-11-19 06:42:07.817556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:16.029 [2024-11-19 06:42:07.817563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:16.029 [2024-11-19 06:42:07.817572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:16.029 [2024-11-19 06:42:07.817579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:16.029 [2024-11-19 06:42:07.817588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:16.029 [2024-11-19 06:42:07.817595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:16.029 [2024-11-19 06:42:07.817604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:16.029 [2024-11-19 06:42:07.817611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:16.029 [2024-11-19 06:42:07.817619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:16.029 [2024-11-19 06:42:07.817627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:16.030 [2024-11-19 06:42:07.817635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.030 [2024-11-19 
06:42:07.817643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.030 [2024-11-19 06:42:07.817655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.030 [2024-11-19 06:42:07.817663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.030 [2024-11-19 06:42:07.817672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.030 [2024-11-19 06:42:07.817680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.030 [2024-11-19 06:42:07.817689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.817696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.030 [2024-11-19 06:42:07.817706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:18:16.030 [2024-11-19 06:42:07.817713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.849331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.849383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.030 [2024-11-19 06:42:07.849397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.554 ms 00:18:16.030 [2024-11-19 06:42:07.849405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.849541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.849552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.030 [2024-11-19 06:42:07.849562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:16.030 [2024-11-19 06:42:07.849570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.884339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.884381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.030 [2024-11-19 06:42:07.884399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.744 ms 00:18:16.030 [2024-11-19 06:42:07.884406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.884494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.884504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.030 [2024-11-19 06:42:07.884515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:16.030 [2024-11-19 06:42:07.884523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.885103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.885130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.030 [2024-11-19 06:42:07.885146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:16.030 [2024-11-19 06:42:07.885154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.885302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.885318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.030 [2024-11-19 06:42:07.885330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:16.030 [2024-11-19 06:42:07.885337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.902965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.903007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.030 [2024-11-19 06:42:07.903020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.601 ms 00:18:16.030 [2024-11-19 06:42:07.903028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.917301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:16.030 [2024-11-19 06:42:07.917362] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:16.030 [2024-11-19 06:42:07.917377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.917385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:16.030 [2024-11-19 06:42:07.917397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms 00:18:16.030 [2024-11-19 06:42:07.917404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.030 [2024-11-19 06:42:07.947963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.030 [2024-11-19 06:42:07.948011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:16.030 [2024-11-19 06:42:07.948026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.462 ms 00:18:16.030 [2024-11-19 06:42:07.948034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:07.960962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:07.961147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:16.289 [2024-11-19 06:42:07.961176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.827 ms 00:18:16.289 [2024-11-19 06:42:07.961185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:07.974217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:07.974265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:16.289 [2024-11-19 06:42:07.974280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:18:16.289 [2024-11-19 06:42:07.974288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:07.974964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:07.974995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.289 [2024-11-19 06:42:07.975008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:18:16.289 [2024-11-19 06:42:07.975017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 
06:42:08.045550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.045624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:16.289 [2024-11-19 06:42:08.045645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.501 ms 00:18:16.289 [2024-11-19 06:42:08.045655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.056982] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:16.289 [2024-11-19 06:42:08.075758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.075816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.289 [2024-11-19 06:42:08.075832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.997 ms 00:18:16.289 [2024-11-19 06:42:08.075842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.075961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.075976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:16.289 [2024-11-19 06:42:08.075986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:16.289 [2024-11-19 06:42:08.075997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.076055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.076067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.289 [2024-11-19 06:42:08.076075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:16.289 [2024-11-19 06:42:08.076086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.076114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.076125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.289 [2024-11-19 06:42:08.076133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:16.289 [2024-11-19 06:42:08.076146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.076208] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:16.289 [2024-11-19 06:42:08.076225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.076233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:16.289 [2024-11-19 06:42:08.076247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:16.289 [2024-11-19 06:42:08.076255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.102424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.102475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.289 [2024-11-19 06:42:08.102492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:18:16.289 [2024-11-19 06:42:08.102500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.102635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.289 [2024-11-19 06:42:08.102647] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.289 [2024-11-19 06:42:08.102659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:16.289 [2024-11-19 06:42:08.102670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.289 [2024-11-19 06:42:08.103768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.289 [2024-11-19 06:42:08.107586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.642 ms, result 0 00:18:16.289 [2024-11-19 06:42:08.109776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:16.289 Some configs were skipped because the RPC state that can call them passed over. 00:18:16.289 06:42:08 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:16.549 [2024-11-19 06:42:08.302134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.549 [2024-11-19 06:42:08.302281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:16.549 [2024-11-19 06:42:08.302302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:18:16.549 [2024-11-19 06:42:08.302313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.549 [2024-11-19 06:42:08.302357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.072 ms, result 0 00:18:16.549 true 00:18:16.549 06:42:08 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:16.810 [2024-11-19 06:42:08.505853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.810 [2024-11-19 06:42:08.506080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:16.810 [2024-11-19 06:42:08.506112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:18:16.810 [2024-11-19 06:42:08.506122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.810 [2024-11-19 06:42:08.506177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.604 ms, result 0 00:18:16.810 true 00:18:16.810 06:42:08 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74327 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74327 ']' 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74327 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74327 00:18:16.810 killing process with pid 74327 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74327' 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74327 00:18:16.810 06:42:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74327 00:18:17.756 [2024-11-19 06:42:09.348323] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.348387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.756 [2024-11-19 06:42:09.348401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.756 [2024-11-19 06:42:09.348411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.348435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:17.756 [2024-11-19 06:42:09.351298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.351539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.756 [2024-11-19 06:42:09.351564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.843 ms 00:18:17.756 [2024-11-19 06:42:09.351572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.353066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.353100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.756 [2024-11-19 06:42:09.353113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:17.756 [2024-11-19 06:42:09.353122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.357564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.357593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.756 [2024-11-19 06:42:09.357607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.417 ms 00:18:17.756 [2024-11-19 06:42:09.357615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.364601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.364631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.756 [2024-11-19 06:42:09.364644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.948 ms 00:18:17.756 [2024-11-19 06:42:09.364653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.375119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.375149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.756 [2024-11-19 06:42:09.375163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.391 ms 00:18:17.756 [2024-11-19 06:42:09.375177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.383650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.383682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.756 [2024-11-19 06:42:09.383696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.418 ms 00:18:17.756 [2024-11-19 06:42:09.383704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.383845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.383856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.756 [2024-11-19 06:42:09.383867] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:17.756 [2024-11-19 06:42:09.383875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.394746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.394775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.756 [2024-11-19 06:42:09.394787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.850 ms 00:18:17.756 [2024-11-19 06:42:09.394794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.404964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.404993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.756 [2024-11-19 06:42:09.405007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.132 ms 00:18:17.756 [2024-11-19 06:42:09.405014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.415024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.415053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.756 [2024-11-19 06:42:09.415067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.970 ms 00:18:17.756 [2024-11-19 06:42:09.415074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.424852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.756 [2024-11-19 06:42:09.424882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.756 [2024-11-19 06:42:09.424893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.714 ms 00:18:17.756 [2024-11-19 06:42:09.424900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.756 [2024-11-19 06:42:09.424949] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.756 [2024-11-19 06:42:09.424963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.424975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.424983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.424992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.756 [2024-11-19 06:42:09.425045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425053] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 
[2024-11-19 06:42:09.425272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:17.757 [2024-11-19 06:42:09.425481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.757 [2024-11-19 06:42:09.425793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.758 [2024-11-19 06:42:09.425801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.758 [2024-11-19 06:42:09.425811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.758 [2024-11-19 06:42:09.425827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.758 [2024-11-19 06:42:09.425842] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:18:17.758 [2024-11-19 06:42:09.425856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.758 [2024-11-19 06:42:09.425868] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.758 [2024-11-19 06:42:09.425876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.758 [2024-11-19 06:42:09.425886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.758 [2024-11-19 06:42:09.425894] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.758 [2024-11-19 06:42:09.425903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.758 [2024-11-19 06:42:09.425910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.758 [2024-11-19 06:42:09.425918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.758 [2024-11-19 06:42:09.425935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.758 [2024-11-19 06:42:09.425944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:17.758 [2024-11-19 06:42:09.425952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.758 [2024-11-19 06:42:09.425962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:18:17.758 [2024-11-19 06:42:09.425970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.439331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.758 [2024-11-19 06:42:09.439361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.758 [2024-11-19 06:42:09.439375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.328 ms 00:18:17.758 [2024-11-19 06:42:09.439383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.439779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.758 [2024-11-19 06:42:09.439797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.758 [2024-11-19 06:42:09.439808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:18:17.758 [2024-11-19 06:42:09.439817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.487359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.487395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.758 [2024-11-19 06:42:09.487409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.487420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.487539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.487550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.758 [2024-11-19 06:42:09.487561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.487571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.487622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.487631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.758 [2024-11-19 06:42:09.487643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.487650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.487669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.487677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.758 [2024-11-19 06:42:09.487687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.487694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.570051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.570210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.758 [2024-11-19 06:42:09.570233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.570242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 
06:42:09.637922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.637975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.758 [2024-11-19 06:42:09.637988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.637999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.758 [2024-11-19 06:42:09.638123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.758 [2024-11-19 06:42:09.638182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.758 [2024-11-19 06:42:09.638309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.758 [2024-11-19 06:42:09.638374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.758 [2024-11-19 06:42:09.638451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.758 [2024-11-19 06:42:09.638522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.758 [2024-11-19 06:42:09.638531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.758 [2024-11-19 06:42:09.638540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.758 [2024-11-19 06:42:09.638695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.342 ms, result 0 00:18:18.704 06:42:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.704 [2024-11-19 06:42:10.427110] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:18.704 [2024-11-19 06:42:10.427255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74380 ] 00:18:18.704 [2024-11-19 06:42:10.590499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.966 [2024-11-19 06:42:10.702775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.228 [2024-11-19 06:42:10.994017] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.228 [2024-11-19 06:42:10.994101] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.228 [2024-11-19 06:42:11.156991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.228 [2024-11-19 06:42:11.157055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.228 [2024-11-19 06:42:11.157071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:19.228 [2024-11-19 06:42:11.157080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.160474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.160703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.491 [2024-11-19 06:42:11.160726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:18:19.491 [2024-11-19 06:42:11.160735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.160892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.491 [2024-11-19 06:42:11.161835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.491 [2024-11-19 06:42:11.161890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.161899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.491 [2024-11-19 06:42:11.161909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:18:19.491 [2024-11-19 06:42:11.161917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.163913] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:19.491 [2024-11-19 06:42:11.178886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.178951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:19.491 [2024-11-19 06:42:11.178966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.977 ms 00:18:19.491 [2024-11-19 06:42:11.178975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.179103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.179116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:19.491 [2024-11-19 06:42:11.179126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:19.491 [2024-11-19 
06:42:11.179134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.188131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.188177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.491 [2024-11-19 06:42:11.188189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.949 ms 00:18:19.491 [2024-11-19 06:42:11.188198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.188313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.188323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.491 [2024-11-19 06:42:11.188333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:19.491 [2024-11-19 06:42:11.188342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.188372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.188385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.491 [2024-11-19 06:42:11.188394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:19.491 [2024-11-19 06:42:11.188402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.188426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:19.491 [2024-11-19 06:42:11.192643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.192684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.491 [2024-11-19 06:42:11.192696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:18:19.491 [2024-11-19 06:42:11.192704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.192782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.192792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.491 [2024-11-19 06:42:11.192803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:19.491 [2024-11-19 06:42:11.192812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.192836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:19.491 [2024-11-19 06:42:11.192861] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:19.491 [2024-11-19 06:42:11.192898] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:19.491 [2024-11-19 06:42:11.192915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:19.491 [2024-11-19 06:42:11.193053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.491 [2024-11-19 06:42:11.193066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.491 [2024-11-19 06:42:11.193078] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:18:19.491 [2024-11-19 06:42:11.193089] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193102] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193111] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:19.491 [2024-11-19 06:42:11.193119] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:19.491 [2024-11-19 06:42:11.193127] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.491 [2024-11-19 06:42:11.193135] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.491 [2024-11-19 06:42:11.193144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.193153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.491 [2024-11-19 06:42:11.193161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:19.491 [2024-11-19 06:42:11.193169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.193258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.491 [2024-11-19 06:42:11.193268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.491 [2024-11-19 06:42:11.193278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:19.491 [2024-11-19 06:42:11.193285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.491 [2024-11-19 06:42:11.193392] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.491 [2024-11-19 06:42:11.193403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.491 [2024-11-19 06:42:11.193412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:19.491 [2024-11-19 06:42:11.193436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.491 [2024-11-19 06:42:11.193458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.491 [2024-11-19 06:42:11.193473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.491 [2024-11-19 06:42:11.193481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:19.491 [2024-11-19 06:42:11.193488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.491 [2024-11-19 06:42:11.193503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.491 [2024-11-19 06:42:11.193510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:19.491 [2024-11-19 06:42:11.193517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:19.491 [2024-11-19 06:42:11.193532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.491 [2024-11-19 06:42:11.193555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.491 [2024-11-19 06:42:11.193576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.491 [2024-11-19 06:42:11.193597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.491 [2024-11-19 06:42:11.193617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:19.491 [2024-11-19 06:42:11.193624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.491 [2024-11-19 06:42:11.193632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.491 [2024-11-19 06:42:11.193639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:19.492 [2024-11-19 06:42:11.193645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.492 [2024-11-19 06:42:11.193652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.492 [2024-11-19 06:42:11.193658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:19.492 [2024-11-19 06:42:11.193665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.492 [2024-11-19 06:42:11.193672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.492 [2024-11-19 06:42:11.193679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:19.492 [2024-11-19 06:42:11.193685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.492 [2024-11-19 06:42:11.193692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.492 [2024-11-19 06:42:11.193699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:19.492 [2024-11-19 06:42:11.193705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.492 [2024-11-19 06:42:11.193714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.492 [2024-11-19 06:42:11.193722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.492 [2024-11-19 06:42:11.193730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.492 [2024-11-19 06:42:11.193740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.492 [2024-11-19 06:42:11.193748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:19.492 [2024-11-19 06:42:11.193755] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.492 [2024-11-19 06:42:11.193762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.492 [2024-11-19 06:42:11.193769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.492 [2024-11-19 06:42:11.193775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.492 [2024-11-19 06:42:11.193782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:19.492 [2024-11-19 06:42:11.193790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.492 [2024-11-19 06:42:11.193801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:19.492 [2024-11-19 06:42:11.193817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:19.492 [2024-11-19 06:42:11.193824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:19.492 [2024-11-19 06:42:11.193832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:19.492 [2024-11-19 06:42:11.193839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:19.492 [2024-11-19 06:42:11.193846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:19.492 [2024-11-19 06:42:11.193854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:19.492 [2024-11-19 06:42:11.193861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:19.492 [2024-11-19 06:42:11.193871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:19.492 [2024-11-19 06:42:11.193878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:19.492 [2024-11-19 06:42:11.193914] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.492 [2024-11-19 06:42:11.193948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:19.492 [2024-11-19 06:42:11.193970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.492 [2024-11-19 06:42:11.193978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.492 [2024-11-19 06:42:11.193985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.492 [2024-11-19 06:42:11.193995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.194003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.492 [2024-11-19 06:42:11.194015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:18:19.492 [2024-11-19 06:42:11.194023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.227517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.227570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.492 [2024-11-19 06:42:11.227583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.434 ms 00:18:19.492 [2024-11-19 06:42:11.227598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.227748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.227763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:19.492 [2024-11-19 06:42:11.227773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:19.492 [2024-11-19 06:42:11.227781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.285460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.285524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.492 [2024-11-19 06:42:11.285540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.654 ms 00:18:19.492 [2024-11-19 06:42:11.285553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.285699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.285712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.492 [2024-11-19 06:42:11.285722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.492 [2024-11-19 06:42:11.285731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.286389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.286424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.492 [2024-11-19 06:42:11.286435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:18:19.492 [2024-11-19 06:42:11.286454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.286618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.286630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.492 [2024-11-19 06:42:11.286639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:19.492 [2024-11-19 06:42:11.286647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.303324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.303370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.492 [2024-11-19 06:42:11.303383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.652 ms 00:18:19.492 [2024-11-19 06:42:11.303391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.318142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:19.492 [2024-11-19 06:42:11.318194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:19.492 [2024-11-19 06:42:11.318209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.318218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:19.492 [2024-11-19 06:42:11.318228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.661 ms 00:18:19.492 [2024-11-19 06:42:11.318237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.344286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.344354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:19.492 [2024-11-19 06:42:11.344367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.941 ms 00:18:19.492 [2024-11-19 06:42:11.344376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.357368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.357561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:19.492 [2024-11-19 06:42:11.357583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.891 ms 00:18:19.492 [2024-11-19 06:42:11.357593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.370896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.370962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:19.492 [2024-11-19 06:42:11.370976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:18:19.492 [2024-11-19 06:42:11.370985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.492 [2024-11-19 06:42:11.371880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.492 [2024-11-19 06:42:11.371917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:19.492 [2024-11-19 06:42:11.371955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:18:19.492 [2024-11-19 06:42:11.371964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.436867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 
06:42:11.437169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:19.755 [2024-11-19 06:42:11.437197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.870 ms 00:18:19.755 [2024-11-19 06:42:11.437207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.449755] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:19.755 [2024-11-19 06:42:11.469753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 06:42:11.469809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:19.755 [2024-11-19 06:42:11.469824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.423 ms 00:18:19.755 [2024-11-19 06:42:11.469832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.469972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 06:42:11.469985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:19.755 [2024-11-19 06:42:11.469996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:19.755 [2024-11-19 06:42:11.470004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.470067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 06:42:11.470077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:19.755 [2024-11-19 06:42:11.470086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:19.755 [2024-11-19 06:42:11.470095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.470124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 06:42:11.470136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:19.755 [2024-11-19 06:42:11.470144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:19.755 [2024-11-19 06:42:11.470153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.470192] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:19.755 [2024-11-19 06:42:11.470203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.755 [2024-11-19 06:42:11.470211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:19.755 [2024-11-19 06:42:11.470219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:19.755 [2024-11-19 06:42:11.470228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.755 [2024-11-19 06:42:11.496327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.756 [2024-11-19 06:42:11.496527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:19.756 [2024-11-19 06:42:11.496550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.075 ms 00:18:19.756 [2024-11-19 06:42:11.496559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.756 [2024-11-19 06:42:11.496698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.756 [2024-11-19 06:42:11.496710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:19.756 [2024-11-19 
06:42:11.496721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:19.756 [2024-11-19 06:42:11.496729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.756 [2024-11-19 06:42:11.497844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.756 [2024-11-19 06:42:11.501467] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.539 ms, result 0 00:18:19.756 [2024-11-19 06:42:11.502627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.756 [2024-11-19 06:42:11.516190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.699  [2024-11-19T06:42:13.568Z] Copying: 14/256 [MB] (14 MBps) [2024-11-19T06:42:14.957Z] Copying: 41/256 [MB] (27 MBps) [2024-11-19T06:42:15.901Z] Copying: 52/256 [MB] (10 MBps) [2024-11-19T06:42:16.845Z] Copying: 69/256 [MB] (17 MBps) [2024-11-19T06:42:17.789Z] Copying: 81/256 [MB] (12 MBps) [2024-11-19T06:42:18.733Z] Copying: 91/256 [MB] (10 MBps) [2024-11-19T06:42:19.678Z] Copying: 109/256 [MB] (17 MBps) [2024-11-19T06:42:20.622Z] Copying: 127/256 [MB] (18 MBps) [2024-11-19T06:42:21.567Z] Copying: 144/256 [MB] (16 MBps) [2024-11-19T06:42:22.955Z] Copying: 155/256 [MB] (10 MBps) [2024-11-19T06:42:23.900Z] Copying: 166/256 [MB] (10 MBps) [2024-11-19T06:42:24.846Z] Copying: 186/256 [MB] (20 MBps) [2024-11-19T06:42:25.785Z] Copying: 203/256 [MB] (16 MBps) [2024-11-19T06:42:26.727Z] Copying: 225/256 [MB] (22 MBps) [2024-11-19T06:42:26.988Z] Copying: 246/256 [MB] (20 MBps) [2024-11-19T06:42:27.561Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-19 06:42:27.295424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.632 [2024-11-19 06:42:27.307738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.307962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.632 [2024-11-19 06:42:27.308096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.632 [2024-11-19 06:42:27.308124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.308174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.632 [2024-11-19 06:42:27.311335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.311384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.632 [2024-11-19 06:42:27.311398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms 00:18:35.632 [2024-11-19 06:42:27.311407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.311759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.311772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.632 [2024-11-19 06:42:27.311783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:18:35.632 [2024-11-19 06:42:27.311792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.315507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 
06:42:27.315537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.632 [2024-11-19 06:42:27.315547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:18:35.632 [2024-11-19 06:42:27.315555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.322742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.322908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.632 [2024-11-19 06:42:27.322947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.166 ms 00:18:35.632 [2024-11-19 06:42:27.322955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.348872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.348953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.632 [2024-11-19 06:42:27.348968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.839 ms 00:18:35.632 [2024-11-19 06:42:27.348976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.364712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.364784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.632 [2024-11-19 06:42:27.364798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.681 ms 00:18:35.632 [2024-11-19 06:42:27.364810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.364984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.364997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.632 [2024-11-19 06:42:27.365007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:35.632 [2024-11-19 06:42:27.365014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.391145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.391193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:35.632 [2024-11-19 06:42:27.391206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.102 ms 00:18:35.632 [2024-11-19 06:42:27.391213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.416341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.416388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:35.632 [2024-11-19 06:42:27.416401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.060 ms 00:18:35.632 [2024-11-19 06:42:27.416408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.441256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.441302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.632 [2024-11-19 06:42:27.441315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:18:35.632 [2024-11-19 06:42:27.441322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.466441] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.632 [2024-11-19 06:42:27.466627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.632 [2024-11-19 06:42:27.466647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.037 ms 00:18:35.632 [2024-11-19 06:42:27.466655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.632 [2024-11-19 06:42:27.466795] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.632 [2024-11-19 06:42:27.466829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.466997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467022] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.632 [2024-11-19 06:42:27.467053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 
[2024-11-19 06:42:27.467218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:18:35.633 [2024-11-19 06:42:27.467410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.633 [2024-11-19 06:42:27.467679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.633 [2024-11-19 06:42:27.467687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef7e4a2-b0d9-4872-9175-6abd80a5f735 00:18:35.633 [2024-11-19 06:42:27.467696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.633 [2024-11-19 06:42:27.467704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:35.633 [2024-11-19 06:42:27.467713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.633 [2024-11-19 06:42:27.467721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.633 [2024-11-19 06:42:27.467730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.633 [2024-11-19 06:42:27.467738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.633 [2024-11-19 06:42:27.467746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.633 [2024-11-19 06:42:27.467753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.633 [2024-11-19 06:42:27.467759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.633 [2024-11-19 06:42:27.467767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.633 [2024-11-19 06:42:27.467779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.633 [2024-11-19 06:42:27.467789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:18:35.633 [2024-11-19 06:42:27.467796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.633 [2024-11-19 06:42:27.481670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.633 [2024-11-19 06:42:27.481854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.634 [2024-11-19 06:42:27.481873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.852 ms 00:18:35.634 [2024-11-19 06:42:27.481881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.634 [2024-11-19 06:42:27.482333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.634 [2024-11-19 06:42:27.482347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.634 [2024-11-19 06:42:27.482358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:18:35.634 [2024-11-19 06:42:27.482366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.634 [2024-11-19 06:42:27.521903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.634 [2024-11-19 06:42:27.521979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.634 [2024-11-19 06:42:27.521991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:35.634 [2024-11-19 06:42:27.522000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.634 [2024-11-19 06:42:27.522117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.634 [2024-11-19 06:42:27.522127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.634 [2024-11-19 06:42:27.522136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.634 [2024-11-19 06:42:27.522144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.634 [2024-11-19 06:42:27.522203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.634 [2024-11-19 06:42:27.522212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.634 [2024-11-19 06:42:27.522222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.634 [2024-11-19 06:42:27.522230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.634 [2024-11-19 06:42:27.522249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.634 [2024-11-19 06:42:27.522261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.634 [2024-11-19 06:42:27.522269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.634 [2024-11-19 06:42:27.522276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.607615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.607675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.895 [2024-11-19 06:42:27.607690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.607698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.676997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.895 [2024-11-19 06:42:27.677075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.895 [2024-11-19 06:42:27.677184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.895 [2024-11-19 06:42:27.677252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.895 
[2024-11-19 06:42:27.677379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.895 [2024-11-19 06:42:27.677443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.895 [2024-11-19 06:42:27.677519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.895 [2024-11-19 06:42:27.677587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.895 [2024-11-19 06:42:27.677600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.895 [2024-11-19 06:42:27.677608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.895 [2024-11-19 06:42:27.677763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.026 ms, result 0 00:18:36.467 00:18:36.467 00:18:36.729 06:42:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:37.300 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:37.300 06:42:28 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:37.300 06:42:28 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:37.301 06:42:28 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:37.301 06:42:28 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:37.301 06:42:28 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:37.301 06:42:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:37.301 Process with pid 74327 is not found 00:18:37.301 06:42:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74327 00:18:37.301 06:42:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74327 ']' 00:18:37.301 06:42:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74327 00:18:37.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74327) - No such process 00:18:37.301 06:42:29 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74327 is not found' 00:18:37.301 ************************************ 00:18:37.301 END TEST ftl_trim 00:18:37.301 ************************************ 00:18:37.301 00:18:37.301 real 1m19.107s 00:18:37.301 user 1m35.154s 00:18:37.301 sys 0m14.888s 00:18:37.301 06:42:29 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.301 06:42:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:37.301 06:42:29 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:37.301 06:42:29 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:37.301 06:42:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.301 06:42:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:37.301 ************************************ 00:18:37.301 START TEST ftl_restore 00:18:37.301 ************************************ 00:18:37.301 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:37.301 * Looking for test storage... 00:18:37.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.561 06:42:29 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:37.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.561 --rc genhtml_branch_coverage=1 00:18:37.561 --rc genhtml_function_coverage=1 00:18:37.561 --rc genhtml_legend=1 00:18:37.561 --rc geninfo_all_blocks=1 00:18:37.561 --rc geninfo_unexecuted_blocks=1 00:18:37.561 00:18:37.561 ' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:37.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.561 --rc genhtml_branch_coverage=1 00:18:37.561 --rc genhtml_function_coverage=1 00:18:37.561 --rc genhtml_legend=1 00:18:37.561 --rc geninfo_all_blocks=1 00:18:37.561 --rc geninfo_unexecuted_blocks=1 00:18:37.561 00:18:37.561 ' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:37.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.561 --rc genhtml_branch_coverage=1 00:18:37.561 --rc genhtml_function_coverage=1 00:18:37.561 --rc genhtml_legend=1 00:18:37.561 --rc geninfo_all_blocks=1 00:18:37.561 --rc geninfo_unexecuted_blocks=1 00:18:37.561 00:18:37.561 ' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:37.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.561 --rc genhtml_branch_coverage=1 00:18:37.561 --rc genhtml_function_coverage=1 00:18:37.561 --rc genhtml_legend=1 00:18:37.561 --rc geninfo_all_blocks=1 00:18:37.561 --rc geninfo_unexecuted_blocks=1 00:18:37.561 00:18:37.561 ' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.nu9uhu0RGy 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:37.561 
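The xtrace above shows restore.sh parsing its arguments: -c selects the NV-cache controller (0000:00:10.0 here), the remaining positional argument is the base controller (0000:00:11.0), and the RPC timeout is set to 240 s. A minimal sketch of how this test run was launched, taken from the run_test call traced at the start of this test (paths and PCI addresses are the ones appearing in this log, not additions):

  # Invoke the FTL restore test against the same two QEMU NVMe controllers
  # used in this run; -c names the write-buffer (NV cache) device.
  cd /home/vagrant/spdk_repo/spdk
  ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
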
06:42:29 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74647 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74647 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74647 ']' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.561 06:42:29 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.561 06:42:29 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:37.561 [2024-11-19 06:42:29.431090] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:37.561 [2024-11-19 06:42:29.431487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74647 ] 00:18:37.823 [2024-11-19 06:42:29.589886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.823 [2024-11-19 06:42:29.710387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.768 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.768 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:38.768 06:42:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:38.768 06:42:30 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:38.768 06:42:30 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:38.768 06:42:30 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:38.768 06:42:30 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:38.769 06:42:30 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:38.769 06:42:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:38.769 06:42:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:39.031 06:42:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.031 { 00:18:39.031 "name": "nvme0n1", 00:18:39.031 "aliases": [ 00:18:39.031 "b254f16f-b133-4bfb-8d45-d348a606e5aa" 00:18:39.031 ], 00:18:39.031 "product_name": "NVMe disk", 00:18:39.031 "block_size": 4096, 00:18:39.031 "num_blocks": 1310720, 00:18:39.031 "uuid": 
"b254f16f-b133-4bfb-8d45-d348a606e5aa", 00:18:39.031 "numa_id": -1, 00:18:39.031 "assigned_rate_limits": { 00:18:39.031 "rw_ios_per_sec": 0, 00:18:39.031 "rw_mbytes_per_sec": 0, 00:18:39.031 "r_mbytes_per_sec": 0, 00:18:39.031 "w_mbytes_per_sec": 0 00:18:39.031 }, 00:18:39.031 "claimed": true, 00:18:39.031 "claim_type": "read_many_write_one", 00:18:39.031 "zoned": false, 00:18:39.031 "supported_io_types": { 00:18:39.031 "read": true, 00:18:39.031 "write": true, 00:18:39.031 "unmap": true, 00:18:39.031 "flush": true, 00:18:39.031 "reset": true, 00:18:39.031 "nvme_admin": true, 00:18:39.031 "nvme_io": true, 00:18:39.031 "nvme_io_md": false, 00:18:39.031 "write_zeroes": true, 00:18:39.031 "zcopy": false, 00:18:39.031 "get_zone_info": false, 00:18:39.031 "zone_management": false, 00:18:39.031 "zone_append": false, 00:18:39.031 "compare": true, 00:18:39.031 "compare_and_write": false, 00:18:39.031 "abort": true, 00:18:39.031 "seek_hole": false, 00:18:39.031 "seek_data": false, 00:18:39.031 "copy": true, 00:18:39.031 "nvme_iov_md": false 00:18:39.031 }, 00:18:39.031 "driver_specific": { 00:18:39.031 "nvme": [ 00:18:39.031 { 00:18:39.031 "pci_address": "0000:00:11.0", 00:18:39.031 "trid": { 00:18:39.031 "trtype": "PCIe", 00:18:39.031 "traddr": "0000:00:11.0" 00:18:39.031 }, 00:18:39.031 "ctrlr_data": { 00:18:39.031 "cntlid": 0, 00:18:39.031 "vendor_id": "0x1b36", 00:18:39.031 "model_number": "QEMU NVMe Ctrl", 00:18:39.031 "serial_number": "12341", 00:18:39.031 "firmware_revision": "8.0.0", 00:18:39.031 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:39.031 "oacs": { 00:18:39.031 "security": 0, 00:18:39.031 "format": 1, 00:18:39.031 "firmware": 0, 00:18:39.031 "ns_manage": 1 00:18:39.031 }, 00:18:39.031 "multi_ctrlr": false, 00:18:39.031 "ana_reporting": false 00:18:39.031 }, 00:18:39.031 "vs": { 00:18:39.031 "nvme_version": "1.4" 00:18:39.031 }, 00:18:39.031 "ns_data": { 00:18:39.031 "id": 1, 00:18:39.031 "can_share": false 00:18:39.031 } 00:18:39.031 } 00:18:39.031 ], 00:18:39.031 "mp_policy": "active_passive" 00:18:39.031 } 00:18:39.031 } 00:18:39.031 ]' 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.031 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.293 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:39.293 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:39.293 06:42:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:39.293 06:42:30 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:39.293 06:42:30 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:39.293 06:42:30 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:39.293 06:42:30 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:39.293 06:42:30 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:39.293 06:42:31 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=be657fd7-4215-43b2-a9dc-883ec14c429d 00:18:39.293 06:42:31 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:39.293 06:42:31 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be657fd7-4215-43b2-a9dc-883ec14c429d 00:18:39.555 06:42:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:39.816 06:42:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d497fe9f-02d0-4236-b996-0b65a0764120 00:18:39.817 06:42:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d497fe9f-02d0-4236-b996-0b65a0764120 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:40.079 06:42:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.079 06:42:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.079 06:42:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.079 06:42:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:40.079 06:42:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:40.079 06:42:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.341 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.341 { 00:18:40.341 "name": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:40.341 "aliases": [ 00:18:40.341 "lvs/nvme0n1p0" 00:18:40.341 ], 00:18:40.341 "product_name": "Logical Volume", 00:18:40.341 "block_size": 4096, 00:18:40.341 "num_blocks": 26476544, 00:18:40.341 "uuid": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:40.341 "assigned_rate_limits": { 00:18:40.341 "rw_ios_per_sec": 0, 00:18:40.341 "rw_mbytes_per_sec": 0, 00:18:40.341 "r_mbytes_per_sec": 0, 00:18:40.341 "w_mbytes_per_sec": 0 00:18:40.341 }, 00:18:40.341 "claimed": false, 00:18:40.341 "zoned": false, 00:18:40.341 "supported_io_types": { 00:18:40.341 "read": true, 00:18:40.341 "write": true, 00:18:40.341 "unmap": true, 00:18:40.341 "flush": false, 00:18:40.341 "reset": true, 00:18:40.341 "nvme_admin": false, 00:18:40.341 "nvme_io": false, 00:18:40.341 "nvme_io_md": false, 00:18:40.341 "write_zeroes": true, 00:18:40.341 "zcopy": false, 00:18:40.341 "get_zone_info": false, 00:18:40.341 "zone_management": false, 00:18:40.341 "zone_append": false, 00:18:40.341 "compare": false, 00:18:40.341 "compare_and_write": false, 00:18:40.341 "abort": false, 00:18:40.341 "seek_hole": true, 00:18:40.342 "seek_data": true, 00:18:40.342 "copy": false, 00:18:40.342 "nvme_iov_md": false 00:18:40.342 }, 00:18:40.342 "driver_specific": { 00:18:40.342 "lvol": { 00:18:40.342 "lvol_store_uuid": "d497fe9f-02d0-4236-b996-0b65a0764120", 00:18:40.342 "base_bdev": "nvme0n1", 00:18:40.342 "thin_provision": true, 00:18:40.342 "num_allocated_clusters": 0, 00:18:40.342 "snapshot": false, 00:18:40.342 "clone": false, 00:18:40.342 "esnap_clone": false 00:18:40.342 } 00:18:40.342 } 00:18:40.342 } 00:18:40.342 ]' 00:18:40.342 06:42:32 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.342 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.342 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.342 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.342 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.342 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.342 06:42:32 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:40.342 06:42:32 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:40.342 06:42:32 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:40.603 06:42:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:40.603 06:42:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:40.603 06:42:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.603 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.603 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.603 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:40.603 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:40.603 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c403e183-9455-402e-931f-d42e3d6ad65f 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.863 { 00:18:40.863 "name": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:40.863 "aliases": [ 00:18:40.863 "lvs/nvme0n1p0" 00:18:40.863 ], 00:18:40.863 "product_name": "Logical Volume", 00:18:40.863 "block_size": 4096, 00:18:40.863 "num_blocks": 26476544, 00:18:40.863 "uuid": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:40.863 "assigned_rate_limits": { 00:18:40.863 "rw_ios_per_sec": 0, 00:18:40.863 "rw_mbytes_per_sec": 0, 00:18:40.863 "r_mbytes_per_sec": 0, 00:18:40.863 "w_mbytes_per_sec": 0 00:18:40.863 }, 00:18:40.863 "claimed": false, 00:18:40.863 "zoned": false, 00:18:40.863 "supported_io_types": { 00:18:40.863 "read": true, 00:18:40.863 "write": true, 00:18:40.863 "unmap": true, 00:18:40.863 "flush": false, 00:18:40.863 "reset": true, 00:18:40.863 "nvme_admin": false, 00:18:40.863 "nvme_io": false, 00:18:40.863 "nvme_io_md": false, 00:18:40.863 "write_zeroes": true, 00:18:40.863 "zcopy": false, 00:18:40.863 "get_zone_info": false, 00:18:40.863 "zone_management": false, 00:18:40.863 "zone_append": false, 00:18:40.863 "compare": false, 00:18:40.863 "compare_and_write": false, 00:18:40.863 "abort": false, 00:18:40.863 "seek_hole": true, 00:18:40.863 "seek_data": true, 00:18:40.863 "copy": false, 00:18:40.863 "nvme_iov_md": false 00:18:40.863 }, 00:18:40.863 "driver_specific": { 00:18:40.863 "lvol": { 00:18:40.863 "lvol_store_uuid": "d497fe9f-02d0-4236-b996-0b65a0764120", 00:18:40.863 "base_bdev": "nvme0n1", 00:18:40.863 "thin_provision": true, 00:18:40.863 "num_allocated_clusters": 0, 00:18:40.863 "snapshot": false, 00:18:40.863 "clone": false, 00:18:40.863 "esnap_clone": false 00:18:40.863 } 00:18:40.863 } 00:18:40.863 } 00:18:40.863 ]' 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
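At this point the trace has built the bdev stack the FTL device will sit on: the base controller is attached as nvme0, an lvstore named lvs is created on nvme0n1, a 103424 MiB thin-provisioned lvol is carved out of it, and the cache controller is attached as nvc0. A condensed sketch of the equivalent rpc.py calls, assembled from the commands traced above (the lvstore UUID is whatever bdev_lvol_create_lvstore returns at run time):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device on 0000:00:11.0 exposes nvme0n1 (1310720 blocks of 4096 B)
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # Logical volume store plus a 103424 MiB thin-provisioned lvol for FTL data
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <uuid-returned-by-previous-call>
  # Second controller on 0000:00:10.0 becomes the write-buffer cache (nvc0n1)
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
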
00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.863 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.863 06:42:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.863 06:42:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:41.121 06:42:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:41.121 06:42:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c403e183-9455-402e-931f-d42e3d6ad65f 00:18:41.121 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c403e183-9455-402e-931f-d42e3d6ad65f 00:18:41.121 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:41.121 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:41.121 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:41.121 06:42:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c403e183-9455-402e-931f-d42e3d6ad65f 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.380 { 00:18:41.380 "name": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:41.380 "aliases": [ 00:18:41.380 "lvs/nvme0n1p0" 00:18:41.380 ], 00:18:41.380 "product_name": "Logical Volume", 00:18:41.380 "block_size": 4096, 00:18:41.380 "num_blocks": 26476544, 00:18:41.380 "uuid": "c403e183-9455-402e-931f-d42e3d6ad65f", 00:18:41.380 "assigned_rate_limits": { 00:18:41.380 "rw_ios_per_sec": 0, 00:18:41.380 "rw_mbytes_per_sec": 0, 00:18:41.380 "r_mbytes_per_sec": 0, 00:18:41.380 "w_mbytes_per_sec": 0 00:18:41.380 }, 00:18:41.380 "claimed": false, 00:18:41.380 "zoned": false, 00:18:41.380 "supported_io_types": { 00:18:41.380 "read": true, 00:18:41.380 "write": true, 00:18:41.380 "unmap": true, 00:18:41.380 "flush": false, 00:18:41.380 "reset": true, 00:18:41.380 "nvme_admin": false, 00:18:41.380 "nvme_io": false, 00:18:41.380 "nvme_io_md": false, 00:18:41.380 "write_zeroes": true, 00:18:41.380 "zcopy": false, 00:18:41.380 "get_zone_info": false, 00:18:41.380 "zone_management": false, 00:18:41.380 "zone_append": false, 00:18:41.380 "compare": false, 00:18:41.380 "compare_and_write": false, 00:18:41.380 "abort": false, 00:18:41.380 "seek_hole": true, 00:18:41.380 "seek_data": true, 00:18:41.380 "copy": false, 00:18:41.380 "nvme_iov_md": false 00:18:41.380 }, 00:18:41.380 "driver_specific": { 00:18:41.380 "lvol": { 00:18:41.380 "lvol_store_uuid": "d497fe9f-02d0-4236-b996-0b65a0764120", 00:18:41.380 "base_bdev": "nvme0n1", 00:18:41.380 "thin_provision": true, 00:18:41.380 "num_allocated_clusters": 0, 00:18:41.380 "snapshot": false, 00:18:41.380 "clone": false, 00:18:41.380 "esnap_clone": false 00:18:41.380 } 00:18:41.380 } 00:18:41.380 } 00:18:41.380 ]' 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.380 06:42:33 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:41.380 06:42:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c403e183-9455-402e-931f-d42e3d6ad65f --l2p_dram_limit 10' 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:41.380 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:41.380 06:42:33 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c403e183-9455-402e-931f-d42e3d6ad65f --l2p_dram_limit 10 -c nvc0n1p0 00:18:41.684 [2024-11-19 06:42:33.405400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.405439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.684 [2024-11-19 06:42:33.405451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.684 [2024-11-19 06:42:33.405458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.405497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.405504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.684 [2024-11-19 06:42:33.405512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:41.684 [2024-11-19 06:42:33.405519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.405537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.684 [2024-11-19 06:42:33.406114] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.684 [2024-11-19 06:42:33.406134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.406140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.684 [2024-11-19 06:42:33.406149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:18:41.684 [2024-11-19 06:42:33.406154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.406205] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:18:41.684 [2024-11-19 06:42:33.407120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.407150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:41.684 [2024-11-19 06:42:33.407158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:41.684 [2024-11-19 06:42:33.407166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.411959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 
06:42:33.411986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.684 [2024-11-19 06:42:33.411995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:18:41.684 [2024-11-19 06:42:33.412002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.412067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.412076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.684 [2024-11-19 06:42:33.412082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:41.684 [2024-11-19 06:42:33.412091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.412124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.412133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.684 [2024-11-19 06:42:33.412140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:41.684 [2024-11-19 06:42:33.412148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.412164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.684 [2024-11-19 06:42:33.415028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.415056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.684 [2024-11-19 06:42:33.415066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:18:41.684 [2024-11-19 06:42:33.415072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.415098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.415105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.684 [2024-11-19 06:42:33.415113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:41.684 [2024-11-19 06:42:33.415119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.415132] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:41.684 [2024-11-19 06:42:33.415232] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:41.684 [2024-11-19 06:42:33.415244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.684 [2024-11-19 06:42:33.415252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:41.684 [2024-11-19 06:42:33.415261] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.684 [2024-11-19 06:42:33.415268] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.684 [2024-11-19 06:42:33.415275] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:41.684 [2024-11-19 06:42:33.415280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.684 [2024-11-19 06:42:33.415288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:41.684 [2024-11-19 06:42:33.415294] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:41.684 [2024-11-19 06:42:33.415300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.415306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.684 [2024-11-19 06:42:33.415313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:18:41.684 [2024-11-19 06:42:33.415324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.684 [2024-11-19 06:42:33.415389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.684 [2024-11-19 06:42:33.415395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.684 [2024-11-19 06:42:33.415402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:41.684 [2024-11-19 06:42:33.415407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.685 [2024-11-19 06:42:33.415494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:41.685 [2024-11-19 06:42:33.415501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.685 [2024-11-19 06:42:33.415509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.685 [2024-11-19 06:42:33.415526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:41.685 [2024-11-19 06:42:33.415544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.685 [2024-11-19 06:42:33.415555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.685 [2024-11-19 06:42:33.415560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:41.685 [2024-11-19 06:42:33.415566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.685 [2024-11-19 06:42:33.415572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.685 [2024-11-19 06:42:33.415578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:41.685 [2024-11-19 06:42:33.415583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.685 [2024-11-19 06:42:33.415596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.685 [2024-11-19 06:42:33.415615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.685 
[2024-11-19 06:42:33.415632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.685 [2024-11-19 06:42:33.415650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.685 [2024-11-19 06:42:33.415666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.685 [2024-11-19 06:42:33.415684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.685 [2024-11-19 06:42:33.415695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.685 [2024-11-19 06:42:33.415700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:41.685 [2024-11-19 06:42:33.415706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.685 [2024-11-19 06:42:33.415711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:41.685 [2024-11-19 06:42:33.415718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:41.685 [2024-11-19 06:42:33.415722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:41.685 [2024-11-19 06:42:33.415733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:41.685 [2024-11-19 06:42:33.415739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415744] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.685 [2024-11-19 06:42:33.415751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.685 [2024-11-19 06:42:33.415756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.685 [2024-11-19 06:42:33.415769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:41.685 [2024-11-19 06:42:33.415776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.685 [2024-11-19 06:42:33.415781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.685 [2024-11-19 06:42:33.415788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.685 [2024-11-19 06:42:33.415793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.685 [2024-11-19 06:42:33.415799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.685 [2024-11-19 06:42:33.415806] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.685 [2024-11-19 
06:42:33.415814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:41.685 [2024-11-19 06:42:33.415829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:41.685 [2024-11-19 06:42:33.415834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:41.685 [2024-11-19 06:42:33.415841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:41.685 [2024-11-19 06:42:33.415847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:41.685 [2024-11-19 06:42:33.415854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:41.685 [2024-11-19 06:42:33.415859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:41.685 [2024-11-19 06:42:33.415865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:41.685 [2024-11-19 06:42:33.415871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:41.685 [2024-11-19 06:42:33.415879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:41.685 [2024-11-19 06:42:33.415908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.685 [2024-11-19 06:42:33.415915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.685 [2024-11-19 06:42:33.415937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.685 [2024-11-19 06:42:33.415943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.685 [2024-11-19 06:42:33.415949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:41.685 [2024-11-19 06:42:33.415955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.685 [2024-11-19 06:42:33.415962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.685 [2024-11-19 06:42:33.415967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:18:41.685 [2024-11-19 06:42:33.415973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.685 [2024-11-19 06:42:33.416002] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:41.685 [2024-11-19 06:42:33.416012] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:45.918 [2024-11-19 06:42:37.094771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.094864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:45.918 [2024-11-19 06:42:37.094882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3678.751 ms 00:18:45.918 [2024-11-19 06:42:37.094894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.126529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.126600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.918 [2024-11-19 06:42:37.126615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.394 ms 00:18:45.918 [2024-11-19 06:42:37.126626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.126767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.126781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:45.918 [2024-11-19 06:42:37.126790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:45.918 [2024-11-19 06:42:37.126804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.161915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.161991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.918 [2024-11-19 06:42:37.162003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.074 ms 00:18:45.918 [2024-11-19 06:42:37.162014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.162050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.162065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.918 [2024-11-19 06:42:37.162074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.918 [2024-11-19 06:42:37.162084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.162648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.162674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.918 [2024-11-19 06:42:37.162685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:18:45.918 [2024-11-19 06:42:37.162695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 
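The FTL startup traced here was kicked off by the construct arguments assembled in restore.sh (restore.sh@49-58 above): the thin lvol is the data device, a 5171 MiB split of the cache controller is the write-buffer cache, and the L2P DRAM limit is 10 MiB. A minimal sketch of those two calls, copied from the trace (the -d UUID is the lvol created earlier in this run):

  # Partition the cache controller, then create the FTL bdev on top of the lvol;
  # --l2p_dram_limit 10 caps the resident L2P cache, which is why
  # ftl_l2p_cache reports "maximum resident size is: 9 (of 10) MiB" during startup.
  $RPC bdev_split_create nvc0n1 -s 5171 1
  $RPC -t 240 bdev_ftl_create -b ftl0 -d c403e183-9455-402e-931f-d42e3d6ad65f \
      --l2p_dram_limit 10 -c nvc0n1p0

The full L2P covers 20971520 entries at 4 bytes each (the 80 MiB l2p region in the layout dump above), so only a small slice of it stays resident in DRAM under this limit.
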
[2024-11-19 06:42:37.162806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.162818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.918 [2024-11-19 06:42:37.162829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:45.918 [2024-11-19 06:42:37.162842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.180189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.180400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.918 [2024-11-19 06:42:37.180420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.328 ms 00:18:45.918 [2024-11-19 06:42:37.180430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.193589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:45.918 [2024-11-19 06:42:37.197337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.197379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:45.918 [2024-11-19 06:42:37.197392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:18:45.918 [2024-11-19 06:42:37.197400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.317151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.317214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:45.918 [2024-11-19 06:42:37.317233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.712 ms 00:18:45.918 [2024-11-19 06:42:37.317243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.317456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.317473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:45.918 [2024-11-19 06:42:37.317488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:18:45.918 [2024-11-19 06:42:37.317497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.343657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.343708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:45.918 [2024-11-19 06:42:37.343725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.103 ms 00:18:45.918 [2024-11-19 06:42:37.343733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.368418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.368463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:45.918 [2024-11-19 06:42:37.368480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.629 ms 00:18:45.918 [2024-11-19 06:42:37.368488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.369156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.369176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:45.918 
[2024-11-19 06:42:37.369188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:18:45.918 [2024-11-19 06:42:37.369196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.454567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.454632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:45.918 [2024-11-19 06:42:37.454652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.319 ms 00:18:45.918 [2024-11-19 06:42:37.454661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.482035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.482087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:45.918 [2024-11-19 06:42:37.482102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.299 ms 00:18:45.918 [2024-11-19 06:42:37.482110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.507301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.507348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:45.918 [2024-11-19 06:42:37.507363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.158 ms 00:18:45.918 [2024-11-19 06:42:37.507371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.533802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.533850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:45.918 [2024-11-19 06:42:37.533864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.399 ms 00:18:45.918 [2024-11-19 06:42:37.533873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.533906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.533915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:45.918 [2024-11-19 06:42:37.533946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:45.918 [2024-11-19 06:42:37.533955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.918 [2024-11-19 06:42:37.534049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.918 [2024-11-19 06:42:37.534060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:45.919 [2024-11-19 06:42:37.534074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:45.919 [2024-11-19 06:42:37.534082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.919 [2024-11-19 06:42:37.535213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4129.282 ms, result 0 00:18:45.919 { 00:18:45.919 "name": "ftl0", 00:18:45.919 "uuid": "2f4e8d2c-3c6e-43e9-8154-ef14d4021b95" 00:18:45.919 } 00:18:45.919 06:42:37 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:45.919 06:42:37 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:45.919 06:42:37 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:45.919 06:42:37 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:46.178 [2024-11-19 06:42:38.046663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.046723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:46.178 [2024-11-19 06:42:38.046736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.178 [2024-11-19 06:42:38.046754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.046777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.178 [2024-11-19 06:42:38.049759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.049800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:46.178 [2024-11-19 06:42:38.049815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:18:46.178 [2024-11-19 06:42:38.049823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.050110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.050121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:46.178 [2024-11-19 06:42:38.050137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:18:46.178 [2024-11-19 06:42:38.050145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.053394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.053419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:46.178 [2024-11-19 06:42:38.053431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:18:46.178 [2024-11-19 06:42:38.053439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.059748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.059783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:46.178 [2024-11-19 06:42:38.059799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.286 ms 00:18:46.178 [2024-11-19 06:42:38.059807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.085168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.085215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:46.178 [2024-11-19 06:42:38.085230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.278 ms 00:18:46.178 [2024-11-19 06:42:38.085238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.102904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.102961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:46.178 [2024-11-19 06:42:38.102978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.613 ms 00:18:46.178 [2024-11-19 06:42:38.102985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.178 [2024-11-19 06:42:38.103153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.178 [2024-11-19 06:42:38.103165] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:46.178 [2024-11-19 06:42:38.103177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:46.178 [2024-11-19 06:42:38.103185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.440 [2024-11-19 06:42:38.128087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.440 [2024-11-19 06:42:38.128130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:46.440 [2024-11-19 06:42:38.128144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.880 ms 00:18:46.440 [2024-11-19 06:42:38.128150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.440 [2024-11-19 06:42:38.152427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.440 [2024-11-19 06:42:38.152468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:46.440 [2024-11-19 06:42:38.152482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.226 ms 00:18:46.440 [2024-11-19 06:42:38.152489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.440 [2024-11-19 06:42:38.177033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.440 [2024-11-19 06:42:38.177075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:46.440 [2024-11-19 06:42:38.177089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms 00:18:46.440 [2024-11-19 06:42:38.177095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.440 [2024-11-19 06:42:38.201619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.440 [2024-11-19 06:42:38.201666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:46.440 [2024-11-19 06:42:38.201680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.428 ms 00:18:46.440 [2024-11-19 06:42:38.201687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.440 [2024-11-19 06:42:38.201738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:46.440 [2024-11-19 06:42:38.201753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201837] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.201997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 
[2024-11-19 06:42:38.202079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:46.440 [2024-11-19 06:42:38.202247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:46.441 [2024-11-19 06:42:38.202345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.441 [2024-11-19 06:42:38.202747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.441 [2024-11-19 06:42:38.202759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:18:46.441 [2024-11-19 06:42:38.202767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.441 [2024-11-19 06:42:38.202779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.441 [2024-11-19 06:42:38.202787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.441 [2024-11-19 06:42:38.202800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.441 [2024-11-19 06:42:38.202807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.441 [2024-11-19 06:42:38.202816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.441 [2024-11-19 06:42:38.202823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.441 [2024-11-19 06:42:38.202831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.441 [2024-11-19 06:42:38.202838] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:46.441 [2024-11-19 06:42:38.202847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.441 [2024-11-19 06:42:38.202854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.441 [2024-11-19 06:42:38.202864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:18:46.441 [2024-11-19 06:42:38.202873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.216343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.441 [2024-11-19 06:42:38.216385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.441 [2024-11-19 06:42:38.216399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.422 ms 00:18:46.441 [2024-11-19 06:42:38.216406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.216798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.441 [2024-11-19 06:42:38.216809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.441 [2024-11-19 06:42:38.216820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:18:46.441 [2024-11-19 06:42:38.216830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.263308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.441 [2024-11-19 06:42:38.263353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.441 [2024-11-19 06:42:38.263367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.441 [2024-11-19 06:42:38.263376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.263445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.441 [2024-11-19 06:42:38.263477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.441 [2024-11-19 06:42:38.263488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.441 [2024-11-19 06:42:38.263499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.263581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.441 [2024-11-19 06:42:38.263592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.441 [2024-11-19 06:42:38.263602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.441 [2024-11-19 06:42:38.263610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.263632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.441 [2024-11-19 06:42:38.263640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.441 [2024-11-19 06:42:38.263650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.441 [2024-11-19 06:42:38.263657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.441 [2024-11-19 06:42:38.348086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.441 [2024-11-19 06:42:38.348137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.441 [2024-11-19 06:42:38.348152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:46.441 [2024-11-19 06:42:38.348160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.703 [2024-11-19 06:42:38.417681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.703 [2024-11-19 06:42:38.417736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.703 [2024-11-19 06:42:38.417751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.703 [2024-11-19 06:42:38.417763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.703 [2024-11-19 06:42:38.417870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.703 [2024-11-19 06:42:38.417881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.703 [2024-11-19 06:42:38.417893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.703 [2024-11-19 06:42:38.417901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.703 [2024-11-19 06:42:38.417983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.703 [2024-11-19 06:42:38.417994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.703 [2024-11-19 06:42:38.418006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.703 [2024-11-19 06:42:38.418014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.703 [2024-11-19 06:42:38.418121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.703 [2024-11-19 06:42:38.418133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.703 [2024-11-19 06:42:38.418145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.703 [2024-11-19 06:42:38.418152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.703 [2024-11-19 06:42:38.418187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.704 [2024-11-19 06:42:38.418197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:46.704 [2024-11-19 06:42:38.418207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.704 [2024-11-19 06:42:38.418215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.704 [2024-11-19 06:42:38.418260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.704 [2024-11-19 06:42:38.418272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.704 [2024-11-19 06:42:38.418284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.704 [2024-11-19 06:42:38.418291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.704 [2024-11-19 06:42:38.418343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.704 [2024-11-19 06:42:38.418354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.704 [2024-11-19 06:42:38.418364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.704 [2024-11-19 06:42:38.418372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.704 [2024-11-19 06:42:38.418522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.810 ms, result 0 00:18:46.704 true 00:18:46.704 06:42:38 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74647 
00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74647 ']' 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74647 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74647 00:18:46.704 killing process with pid 74647 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74647' 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74647 00:18:46.704 06:42:38 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74647 00:18:50.913 06:42:42 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:54.210 262144+0 records in 00:18:54.210 262144+0 records out 00:18:54.210 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.86916 s, 278 MB/s 00:18:54.210 06:42:45 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:56.124 06:42:47 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.124 [2024-11-19 06:42:48.018651] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:56.124 [2024-11-19 06:42:48.018766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74878 ] 00:18:56.386 [2024-11-19 06:42:48.177981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.386 [2024-11-19 06:42:48.293713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.961 [2024-11-19 06:42:48.581707] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.961 [2024-11-19 06:42:48.581787] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.961 [2024-11-19 06:42:48.743045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.743103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:56.961 [2024-11-19 06:42:48.743125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:56.961 [2024-11-19 06:42:48.743134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.743186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.743198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.961 [2024-11-19 06:42:48.743209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:56.961 [2024-11-19 06:42:48.743217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.743238] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:56.961 [2024-11-19 06:42:48.744038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:56.961 [2024-11-19 06:42:48.744062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.744071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.961 [2024-11-19 06:42:48.744080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:18:56.961 [2024-11-19 06:42:48.744089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.745740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:56.961 [2024-11-19 06:42:48.759878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.760092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:56.961 [2024-11-19 06:42:48.760116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:18:56.961 [2024-11-19 06:42:48.760124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.760275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.760301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:56.961 [2024-11-19 06:42:48.760312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:56.961 [2024-11-19 06:42:48.760320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.768251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.768292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.961 [2024-11-19 06:42:48.768303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.851 ms 00:18:56.961 [2024-11-19 06:42:48.768311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.768397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.768406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.961 [2024-11-19 06:42:48.768416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:56.961 [2024-11-19 06:42:48.768423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.768465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.768475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:56.961 [2024-11-19 06:42:48.768484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:56.961 [2024-11-19 06:42:48.768491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.768515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:56.961 [2024-11-19 06:42:48.772605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.772645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.961 [2024-11-19 06:42:48.772656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:18:56.961 [2024-11-19 06:42:48.772666] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.772704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.772712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:56.961 [2024-11-19 06:42:48.772720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:56.961 [2024-11-19 06:42:48.772728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.961 [2024-11-19 06:42:48.772779] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:56.961 [2024-11-19 06:42:48.772802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:56.961 [2024-11-19 06:42:48.772838] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:56.961 [2024-11-19 06:42:48.772858] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:56.961 [2024-11-19 06:42:48.772986] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:56.961 [2024-11-19 06:42:48.772998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:56.961 [2024-11-19 06:42:48.773010] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:56.961 [2024-11-19 06:42:48.773021] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:56.961 [2024-11-19 06:42:48.773031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:56.961 [2024-11-19 06:42:48.773040] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:56.961 [2024-11-19 06:42:48.773048] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:56.961 [2024-11-19 06:42:48.773056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:56.961 [2024-11-19 06:42:48.773064] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:56.961 [2024-11-19 06:42:48.773076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.961 [2024-11-19 06:42:48.773083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:56.962 [2024-11-19 06:42:48.773091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:56.962 [2024-11-19 06:42:48.773098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.773182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.773191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:56.962 [2024-11-19 06:42:48.773198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:56.962 [2024-11-19 06:42:48.773207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.773309] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:56.962 [2024-11-19 06:42:48.773323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:56.962 [2024-11-19 06:42:48.773331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:56.962 [2024-11-19 06:42:48.773339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:56.962 [2024-11-19 06:42:48.773354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:56.962 [2024-11-19 06:42:48.773376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.962 [2024-11-19 06:42:48.773391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:56.962 [2024-11-19 06:42:48.773397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:56.962 [2024-11-19 06:42:48.773404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.962 [2024-11-19 06:42:48.773411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:56.962 [2024-11-19 06:42:48.773418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:56.962 [2024-11-19 06:42:48.773431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:56.962 [2024-11-19 06:42:48.773445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:56.962 [2024-11-19 06:42:48.773466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:56.962 [2024-11-19 06:42:48.773486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:56.962 [2024-11-19 06:42:48.773506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:56.962 [2024-11-19 06:42:48.773527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:56.962 [2024-11-19 06:42:48.773546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.962 [2024-11-19 06:42:48.773560] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:56.962 [2024-11-19 06:42:48.773566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:56.962 [2024-11-19 06:42:48.773572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.962 [2024-11-19 06:42:48.773578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:56.962 [2024-11-19 06:42:48.773585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:56.962 [2024-11-19 06:42:48.773591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:56.962 [2024-11-19 06:42:48.773607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:56.962 [2024-11-19 06:42:48.773614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:56.962 [2024-11-19 06:42:48.773629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:56.962 [2024-11-19 06:42:48.773636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.962 [2024-11-19 06:42:48.773652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:56.962 [2024-11-19 06:42:48.773659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:56.962 [2024-11-19 06:42:48.773666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:56.962 [2024-11-19 06:42:48.773673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:56.962 [2024-11-19 06:42:48.773680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:56.962 [2024-11-19 06:42:48.773686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:56.962 [2024-11-19 06:42:48.773695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:56.962 [2024-11-19 06:42:48.773704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:56.962 [2024-11-19 06:42:48.773720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:56.962 [2024-11-19 06:42:48.773728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:56.962 [2024-11-19 06:42:48.773735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:56.962 [2024-11-19 06:42:48.773742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:56.962 [2024-11-19 06:42:48.773750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:56.962 [2024-11-19 06:42:48.773757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:56.962 [2024-11-19 06:42:48.773764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:56.962 [2024-11-19 06:42:48.773772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:56.962 [2024-11-19 06:42:48.773779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:56.962 [2024-11-19 06:42:48.773816] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:56.962 [2024-11-19 06:42:48.773826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:56.962 [2024-11-19 06:42:48.773844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:56.962 [2024-11-19 06:42:48.773853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:56.962 [2024-11-19 06:42:48.773862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:56.962 [2024-11-19 06:42:48.773870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.773879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:56.962 [2024-11-19 06:42:48.773887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:18:56.962 [2024-11-19 06:42:48.773894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.805544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.805593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.962 [2024-11-19 06:42:48.805605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.594 ms 00:18:56.962 [2024-11-19 06:42:48.805613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.805710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.805720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.962 [2024-11-19 06:42:48.805729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.068 ms 00:18:56.962 [2024-11-19 06:42:48.805737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.850302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.850517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.962 [2024-11-19 06:42:48.850540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.508 ms 00:18:56.962 [2024-11-19 06:42:48.850550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.962 [2024-11-19 06:42:48.850600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.962 [2024-11-19 06:42:48.850610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.963 [2024-11-19 06:42:48.850620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:56.963 [2024-11-19 06:42:48.850633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.963 [2024-11-19 06:42:48.851240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.963 [2024-11-19 06:42:48.851272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.963 [2024-11-19 06:42:48.851284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:18:56.963 [2024-11-19 06:42:48.851293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.963 [2024-11-19 06:42:48.851451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.963 [2024-11-19 06:42:48.851477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.963 [2024-11-19 06:42:48.851486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:56.963 [2024-11-19 06:42:48.851497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.963 [2024-11-19 06:42:48.867020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.963 [2024-11-19 06:42:48.867062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.963 [2024-11-19 06:42:48.867077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:18:56.963 [2024-11-19 06:42:48.867085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.963 [2024-11-19 06:42:48.880945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:56.963 [2024-11-19 06:42:48.881124] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:56.963 [2024-11-19 06:42:48.881143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.963 [2024-11-19 06:42:48.881151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:56.963 [2024-11-19 06:42:48.881161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.954 ms 00:18:56.963 [2024-11-19 06:42:48.881169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:48.907082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:48.907130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:57.224 [2024-11-19 06:42:48.907150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:18:57.224 [2024-11-19 06:42:48.907159] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:48.919749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:48.919800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:57.224 [2024-11-19 06:42:48.919812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.555 ms 00:18:57.224 [2024-11-19 06:42:48.919819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:48.932182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:48.932226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:57.224 [2024-11-19 06:42:48.932238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:18:57.224 [2024-11-19 06:42:48.932246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:48.932888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:48.932911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.224 [2024-11-19 06:42:48.932937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:18:57.224 [2024-11-19 06:42:48.932946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:48.997448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:48.997515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:57.224 [2024-11-19 06:42:48.997532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.479 ms 00:18:57.224 [2024-11-19 06:42:48.997549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.008843] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:57.224 [2024-11-19 06:42:49.011750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.011934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.224 [2024-11-19 06:42:49.011955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.143 ms 00:18:57.224 [2024-11-19 06:42:49.011964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.012050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.012062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:57.224 [2024-11-19 06:42:49.012072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:57.224 [2024-11-19 06:42:49.012080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.012155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.012167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.224 [2024-11-19 06:42:49.012176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:57.224 [2024-11-19 06:42:49.012185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.012204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.012214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:57.224 [2024-11-19 06:42:49.012223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.224 [2024-11-19 06:42:49.012231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.012265] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:57.224 [2024-11-19 06:42:49.012276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.012287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:57.224 [2024-11-19 06:42:49.012295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:57.224 [2024-11-19 06:42:49.012303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.037664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.037711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.224 [2024-11-19 06:42:49.037725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.343 ms 00:18:57.224 [2024-11-19 06:42:49.037733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.037827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.224 [2024-11-19 06:42:49.037838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.224 [2024-11-19 06:42:49.037847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:57.224 [2024-11-19 06:42:49.037856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.224 [2024-11-19 06:42:49.039245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.694 ms, result 0 00:18:58.166  [2024-11-19T06:42:51.475Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-19T06:42:52.419Z] Copying: 36/1024 [MB] (24 MBps) [2024-11-19T06:42:53.359Z] Copying: 50/1024 [MB] (14 MBps) [2024-11-19T06:42:54.298Z] Copying: 68/1024 [MB] (17 MBps) [2024-11-19T06:42:55.244Z] Copying: 103/1024 [MB] (35 MBps) [2024-11-19T06:42:56.186Z] Copying: 118/1024 [MB] (15 MBps) [2024-11-19T06:42:57.126Z] Copying: 133/1024 [MB] (15 MBps) [2024-11-19T06:42:58.065Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-19T06:42:59.449Z] Copying: 158/1024 [MB] (14 MBps) [2024-11-19T06:43:00.388Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-19T06:43:01.331Z] Copying: 186/1024 [MB] (16 MBps) [2024-11-19T06:43:02.273Z] Copying: 205/1024 [MB] (18 MBps) [2024-11-19T06:43:03.214Z] Copying: 216/1024 [MB] (11 MBps) [2024-11-19T06:43:04.149Z] Copying: 232/1024 [MB] (16 MBps) [2024-11-19T06:43:05.093Z] Copying: 278/1024 [MB] (45 MBps) [2024-11-19T06:43:06.479Z] Copying: 301/1024 [MB] (23 MBps) [2024-11-19T06:43:07.126Z] Copying: 315/1024 [MB] (14 MBps) [2024-11-19T06:43:08.070Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-19T06:43:09.459Z] Copying: 342/1024 [MB] (14 MBps) [2024-11-19T06:43:10.403Z] Copying: 355/1024 [MB] (12 MBps) [2024-11-19T06:43:11.346Z] Copying: 367/1024 [MB] (12 MBps) [2024-11-19T06:43:12.288Z] Copying: 382/1024 [MB] (15 MBps) [2024-11-19T06:43:13.230Z] Copying: 399/1024 [MB] (16 MBps) [2024-11-19T06:43:14.173Z] Copying: 413/1024 [MB] (14 MBps) [2024-11-19T06:43:15.113Z] Copying: 427/1024 [MB] (13 MBps) [2024-11-19T06:43:16.056Z] Copying: 439/1024 [MB] (11 MBps) [2024-11-19T06:43:17.440Z] Copying: 451/1024 [MB] (12 
MBps) [2024-11-19T06:43:18.383Z] Copying: 466/1024 [MB] (14 MBps) [2024-11-19T06:43:19.326Z] Copying: 481/1024 [MB] (15 MBps) [2024-11-19T06:43:20.267Z] Copying: 496/1024 [MB] (15 MBps) [2024-11-19T06:43:21.209Z] Copying: 511/1024 [MB] (14 MBps) [2024-11-19T06:43:22.149Z] Copying: 527/1024 [MB] (15 MBps) [2024-11-19T06:43:23.090Z] Copying: 537/1024 [MB] (10 MBps) [2024-11-19T06:43:24.466Z] Copying: 560496/1048576 [kB] (10200 kBps) [2024-11-19T06:43:25.400Z] Copying: 581/1024 [MB] (34 MBps) [2024-11-19T06:43:26.334Z] Copying: 624/1024 [MB] (42 MBps) [2024-11-19T06:43:27.267Z] Copying: 655/1024 [MB] (31 MBps) [2024-11-19T06:43:28.201Z] Copying: 688/1024 [MB] (32 MBps) [2024-11-19T06:43:29.135Z] Copying: 713/1024 [MB] (25 MBps) [2024-11-19T06:43:30.067Z] Copying: 748/1024 [MB] (34 MBps) [2024-11-19T06:43:31.444Z] Copying: 784/1024 [MB] (36 MBps) [2024-11-19T06:43:32.388Z] Copying: 820/1024 [MB] (35 MBps) [2024-11-19T06:43:33.323Z] Copying: 833/1024 [MB] (12 MBps) [2024-11-19T06:43:34.263Z] Copying: 855/1024 [MB] (22 MBps) [2024-11-19T06:43:35.199Z] Copying: 878/1024 [MB] (22 MBps) [2024-11-19T06:43:36.219Z] Copying: 901/1024 [MB] (22 MBps) [2024-11-19T06:43:37.158Z] Copying: 935/1024 [MB] (33 MBps) [2024-11-19T06:43:38.096Z] Copying: 966/1024 [MB] (30 MBps) [2024-11-19T06:43:39.471Z] Copying: 984/1024 [MB] (18 MBps) [2024-11-19T06:43:39.471Z] Copying: 1017/1024 [MB] (32 MBps) [2024-11-19T06:43:39.471Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-19 06:43:39.210690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.210727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.542 [2024-11-19 06:43:39.210738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:47.542 [2024-11-19 06:43:39.210745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.542 [2024-11-19 06:43:39.210761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.542 [2024-11-19 06:43:39.212939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.212961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.542 [2024-11-19 06:43:39.212970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:19:47.542 [2024-11-19 06:43:39.212977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.542 [2024-11-19 06:43:39.214280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.214303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.542 [2024-11-19 06:43:39.214310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:19:47.542 [2024-11-19 06:43:39.214317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.542 [2024-11-19 06:43:39.226027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.226052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.542 [2024-11-19 06:43:39.226060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.699 ms 00:19:47.542 [2024-11-19 06:43:39.226066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.542 [2024-11-19 06:43:39.230823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.230848] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.542 [2024-11-19 06:43:39.230856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.735 ms 00:19:47.542 [2024-11-19 06:43:39.230862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.542 [2024-11-19 06:43:39.249053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.542 [2024-11-19 06:43:39.249077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.543 [2024-11-19 06:43:39.249085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:19:47.543 [2024-11-19 06:43:39.249091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.260216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.260330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.543 [2024-11-19 06:43:39.260344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.102 ms 00:19:47.543 [2024-11-19 06:43:39.260350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.260439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.260447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.543 [2024-11-19 06:43:39.260455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:47.543 [2024-11-19 06:43:39.260461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.278981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.279081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.543 [2024-11-19 06:43:39.279094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:19:47.543 [2024-11-19 06:43:39.279099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.297425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.297448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.543 [2024-11-19 06:43:39.297462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.305 ms 00:19:47.543 [2024-11-19 06:43:39.297467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.314579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.314677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.543 [2024-11-19 06:43:39.314689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.085 ms 00:19:47.543 [2024-11-19 06:43:39.314694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.331866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.543 [2024-11-19 06:43:39.331890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.543 [2024-11-19 06:43:39.331897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.134 ms 00:19:47.543 [2024-11-19 06:43:39.331903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.543 [2024-11-19 06:43:39.331936] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.543 [2024-11-19 06:43:39.331947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.331995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332084] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 
06:43:39.332225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:19:47.543 [2024-11-19 06:43:39.332370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.543 [2024-11-19 06:43:39.332468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.544 [2024-11-19 06:43:39.332519] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.544 [2024-11-19 06:43:39.332527] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:19:47.544 [2024-11-19 06:43:39.332533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.544 [2024-11-19 06:43:39.332540] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.544 [2024-11-19 06:43:39.332546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.544 [2024-11-19 06:43:39.332552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.544 [2024-11-19 06:43:39.332557] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.544 [2024-11-19 06:43:39.332562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.544 [2024-11-19 06:43:39.332568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.544 [2024-11-19 06:43:39.332577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.544 [2024-11-19 06:43:39.332583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.544 [2024-11-19 06:43:39.332588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.544 [2024-11-19 06:43:39.332593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.544 [2024-11-19 06:43:39.332600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:19:47.544 [2024-11-19 06:43:39.332605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.341858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.544 [2024-11-19 06:43:39.341965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.544 [2024-11-19 06:43:39.341977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.242 ms 00:19:47.544 [2024-11-19 06:43:39.341982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.342242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.544 [2024-11-19 06:43:39.342253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.544 [2024-11-19 06:43:39.342260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:47.544 [2024-11-19 06:43:39.342265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.368030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.544 [2024-11-19 06:43:39.368056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.544 [2024-11-19 06:43:39.368064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.544 [2024-11-19 06:43:39.368070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.368109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.544 [2024-11-19 06:43:39.368115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.544 [2024-11-19 06:43:39.368121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:47.544 [2024-11-19 06:43:39.368127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.368180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.544 [2024-11-19 06:43:39.368188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.544 [2024-11-19 06:43:39.368193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.544 [2024-11-19 06:43:39.368199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.368209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.544 [2024-11-19 06:43:39.368215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.544 [2024-11-19 06:43:39.368221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.544 [2024-11-19 06:43:39.368226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.544 [2024-11-19 06:43:39.427401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.544 [2024-11-19 06:43:39.427431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.544 [2024-11-19 06:43:39.427440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.544 [2024-11-19 06:43:39.427446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.802 [2024-11-19 06:43:39.476682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.476689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.802 [2024-11-19 06:43:39.476745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.476751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.802 [2024-11-19 06:43:39.476805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.476810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.802 [2024-11-19 06:43:39.476898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.476903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.802 
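(Editor's note on the statistics dump a few entries above: it reports "total valid LBAs: 0", "total writes: 960", "user writes: 0" and "WAF: inf". Assuming the usual definition of write amplification, device writes divided by host writes, the "inf" value follows directly from the zero user-write count for this pass:

    WAF = total writes / user writes = 960 / 0  ->  inf

The numbers are taken from the dump itself; only the formula is an assumption about how ftl_debug.c computes the figure.)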
[2024-11-19 06:43:39.476957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.476963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.476989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.476996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.802 [2024-11-19 06:43:39.477004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.477009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.477041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.802 [2024-11-19 06:43:39.477048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.802 [2024-11-19 06:43:39.477054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.802 [2024-11-19 06:43:39.477060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.802 [2024-11-19 06:43:39.477145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.432 ms, result 0 00:19:48.371 00:19:48.371 00:19:48.371 06:43:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:48.371 [2024-11-19 06:43:40.175437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:48.371 [2024-11-19 06:43:40.175568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75422 ] 00:19:48.630 [2024-11-19 06:43:40.332499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.630 [2024-11-19 06:43:40.406680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.889 [2024-11-19 06:43:40.609478] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.889 [2024-11-19 06:43:40.609528] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.889 [2024-11-19 06:43:40.756274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.756310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.889 [2024-11-19 06:43:40.756324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.889 [2024-11-19 06:43:40.756330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.756363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.756371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.889 [2024-11-19 06:43:40.756379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:48.889 [2024-11-19 06:43:40.756384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.756397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.889 [2024-11-19 06:43:40.756942] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.889 [2024-11-19 06:43:40.756954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.756960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.889 [2024-11-19 06:43:40.756967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:19:48.889 [2024-11-19 06:43:40.756972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.757851] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:48.889 [2024-11-19 06:43:40.767551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.767579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:48.889 [2024-11-19 06:43:40.767587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.701 ms 00:19:48.889 [2024-11-19 06:43:40.767593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.767634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.767642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:48.889 [2024-11-19 06:43:40.767648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:48.889 [2024-11-19 06:43:40.767653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.771875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.771900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.889 [2024-11-19 06:43:40.771907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:19:48.889 [2024-11-19 06:43:40.771913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.771978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.771985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.889 [2024-11-19 06:43:40.771991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:48.889 [2024-11-19 06:43:40.771997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.772036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.889 [2024-11-19 06:43:40.772043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.889 [2024-11-19 06:43:40.772049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.889 [2024-11-19 06:43:40.772055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.889 [2024-11-19 06:43:40.772068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.890 [2024-11-19 06:43:40.774728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.890 [2024-11-19 06:43:40.774751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.890 [2024-11-19 06:43:40.774757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:19:48.890 [2024-11-19 06:43:40.774765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.890 
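(Editor's note: the FTL startup traced here was kicked off by the spdk_dd restore invocation printed just before it. A minimal annotated sketch of that command follows; the flag descriptions and the 4 KiB block-size assumption used to relate --count to the copied size are editorial, not stated in the log:

    # Restore step as invoked above (paths exactly as printed in the log).
    # --ib    : input bdev, here the FTL device "ftl0" assembled from ftl.json
    # --of    : output file on the host filesystem
    # --json  : SPDK JSON config loaded at startup (base/cache bdevs plus ftl0)
    # --count : number of blocks to copy; assuming 4 KiB blocks this is ~1024 MiB,
    #           consistent with the "Copying: .../1024 [MB]" progress lines below.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --count=262144
)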
[2024-11-19 06:43:40.774790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.890 [2024-11-19 06:43:40.774796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.890 [2024-11-19 06:43:40.774803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.890 [2024-11-19 06:43:40.774808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.890 [2024-11-19 06:43:40.774821] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:48.890 [2024-11-19 06:43:40.774835] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:48.890 [2024-11-19 06:43:40.774861] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:48.890 [2024-11-19 06:43:40.774874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:48.890 [2024-11-19 06:43:40.774960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.890 [2024-11-19 06:43:40.774968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.890 [2024-11-19 06:43:40.774976] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:48.890 [2024-11-19 06:43:40.774984] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.890 [2024-11-19 06:43:40.774991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.890 [2024-11-19 06:43:40.774997] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:48.890 [2024-11-19 06:43:40.775003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.890 [2024-11-19 06:43:40.775008] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.890 [2024-11-19 06:43:40.775013] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.890 [2024-11-19 06:43:40.775021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.890 [2024-11-19 06:43:40.775027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.890 [2024-11-19 06:43:40.775032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:48.890 [2024-11-19 06:43:40.775038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.890 [2024-11-19 06:43:40.775099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.890 [2024-11-19 06:43:40.775106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.890 [2024-11-19 06:43:40.775111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:48.890 [2024-11-19 06:43:40.775117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.890 [2024-11-19 06:43:40.775191] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.890 [2024-11-19 06:43:40.775200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.890 [2024-11-19 06:43:40.775206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775212] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.890 [2024-11-19 06:43:40.775223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.890 [2024-11-19 06:43:40.775240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.890 [2024-11-19 06:43:40.775250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.890 [2024-11-19 06:43:40.775255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:48.890 [2024-11-19 06:43:40.775260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.890 [2024-11-19 06:43:40.775265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.890 [2024-11-19 06:43:40.775271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:48.890 [2024-11-19 06:43:40.775280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.890 [2024-11-19 06:43:40.775290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.890 [2024-11-19 06:43:40.775306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.890 [2024-11-19 06:43:40.775321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.890 [2024-11-19 06:43:40.775336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.890 [2024-11-19 06:43:40.775350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.890 [2024-11-19 06:43:40.775365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.890 [2024-11-19 06:43:40.775375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.890 [2024-11-19 06:43:40.775380] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:48.890 [2024-11-19 06:43:40.775384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.890 [2024-11-19 06:43:40.775389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.890 [2024-11-19 06:43:40.775394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:48.890 [2024-11-19 06:43:40.775399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.890 [2024-11-19 06:43:40.775408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:48.890 [2024-11-19 06:43:40.775413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775419] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.890 [2024-11-19 06:43:40.775425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.890 [2024-11-19 06:43:40.775430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.890 [2024-11-19 06:43:40.775443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.890 [2024-11-19 06:43:40.775448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.890 [2024-11-19 06:43:40.775453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.890 [2024-11-19 06:43:40.775458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.890 [2024-11-19 06:43:40.775463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.890 [2024-11-19 06:43:40.775484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.890 [2024-11-19 06:43:40.775490] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.890 [2024-11-19 06:43:40.775497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.890 [2024-11-19 06:43:40.775504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:48.890 [2024-11-19 06:43:40.775509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:48.890 [2024-11-19 06:43:40.775515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:48.890 [2024-11-19 06:43:40.775520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:48.890 [2024-11-19 06:43:40.775525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:48.890 [2024-11-19 06:43:40.775530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:48.890 [2024-11-19 06:43:40.775536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:48.890 [2024-11-19 
06:43:40.775541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:48.890 [2024-11-19 06:43:40.775547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:48.890 [2024-11-19 06:43:40.775552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:48.890 [2024-11-19 06:43:40.775557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:48.890 [2024-11-19 06:43:40.775563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:48.890 [2024-11-19 06:43:40.775568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:48.890 [2024-11-19 06:43:40.775573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:48.891 [2024-11-19 06:43:40.775579] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.891 [2024-11-19 06:43:40.775586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.891 [2024-11-19 06:43:40.775593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.891 [2024-11-19 06:43:40.775599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.891 [2024-11-19 06:43:40.775604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.891 [2024-11-19 06:43:40.775609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.891 [2024-11-19 06:43:40.775615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.891 [2024-11-19 06:43:40.775620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.891 [2024-11-19 06:43:40.775626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:19:48.891 [2024-11-19 06:43:40.775632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.891 [2024-11-19 06:43:40.796296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.891 [2024-11-19 06:43:40.796398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.891 [2024-11-19 06:43:40.796441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.632 ms 00:19:48.891 [2024-11-19 06:43:40.796459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.891 [2024-11-19 06:43:40.796536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.891 [2024-11-19 06:43:40.796552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.891 [2024-11-19 06:43:40.796567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:48.891 [2024-11-19 06:43:40.796581] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.832485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.832597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.149 [2024-11-19 06:43:40.832651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.858 ms 00:19:49.149 [2024-11-19 06:43:40.832670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.832709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.832728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.149 [2024-11-19 06:43:40.832743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:49.149 [2024-11-19 06:43:40.832760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.833083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.833160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.149 [2024-11-19 06:43:40.833222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:49.149 [2024-11-19 06:43:40.833242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.833353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.833646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.149 [2024-11-19 06:43:40.833706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:49.149 [2024-11-19 06:43:40.833725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.844163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.844247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.149 [2024-11-19 06:43:40.844285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.395 ms 00:19:49.149 [2024-11-19 06:43:40.844306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.854024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:49.149 [2024-11-19 06:43:40.854122] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.149 [2024-11-19 06:43:40.854171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.854188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.149 [2024-11-19 06:43:40.854203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.785 ms 00:19:49.149 [2024-11-19 06:43:40.854217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 06:43:40.872631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.149 [2024-11-19 06:43:40.872725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.149 [2024-11-19 06:43:40.872764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.343 ms 00:19:49.149 [2024-11-19 06:43:40.872781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.149 [2024-11-19 
06:43:40.881878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.881977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.150 [2024-11-19 06:43:40.882018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.061 ms 00:19:49.150 [2024-11-19 06:43:40.882034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.890820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.890902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.150 [2024-11-19 06:43:40.890961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.757 ms 00:19:49.150 [2024-11-19 06:43:40.890978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.891425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.891500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.150 [2024-11-19 06:43:40.891539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:49.150 [2024-11-19 06:43:40.891559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.935092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.935204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.150 [2024-11-19 06:43:40.935299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.508 ms 00:19:49.150 [2024-11-19 06:43:40.935323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.943069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.150 [2024-11-19 06:43:40.944840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.944918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.150 [2024-11-19 06:43:40.944970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.478 ms 00:19:49.150 [2024-11-19 06:43:40.944989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.945055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.945202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:49.150 [2024-11-19 06:43:40.945222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:49.150 [2024-11-19 06:43:40.945241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.945298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.945316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.150 [2024-11-19 06:43:40.945364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:49.150 [2024-11-19 06:43:40.945380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.945407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.945423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.150 [2024-11-19 06:43:40.945438] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.150 [2024-11-19 06:43:40.945452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.945527] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:49.150 [2024-11-19 06:43:40.945592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.945610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:49.150 [2024-11-19 06:43:40.945641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:49.150 [2024-11-19 06:43:40.945657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.963083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.963168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.150 [2024-11-19 06:43:40.963212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.400 ms 00:19:49.150 [2024-11-19 06:43:40.963232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.963290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.150 [2024-11-19 06:43:40.963380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.150 [2024-11-19 06:43:40.963399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:49.150 [2024-11-19 06:43:40.963413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.150 [2024-11-19 06:43:40.964179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.575 ms, result 0 00:19:50.539  [2024-11-19T06:43:43.405Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-19T06:43:44.347Z] Copying: 49/1024 [MB] (27 MBps) [2024-11-19T06:43:45.292Z] Copying: 73/1024 [MB] (23 MBps) [2024-11-19T06:43:46.235Z] Copying: 88/1024 [MB] (15 MBps) [2024-11-19T06:43:47.180Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-19T06:43:48.123Z] Copying: 116/1024 [MB] (12 MBps) [2024-11-19T06:43:49.505Z] Copying: 126/1024 [MB] (10 MBps) [2024-11-19T06:43:50.448Z] Copying: 142/1024 [MB] (15 MBps) [2024-11-19T06:43:51.391Z] Copying: 163/1024 [MB] (21 MBps) [2024-11-19T06:43:52.328Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-19T06:43:53.273Z] Copying: 195/1024 [MB] (20 MBps) [2024-11-19T06:43:54.213Z] Copying: 207/1024 [MB] (12 MBps) [2024-11-19T06:43:55.153Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-19T06:43:56.098Z] Copying: 228/1024 [MB] (10 MBps) [2024-11-19T06:43:57.484Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-19T06:43:58.425Z] Copying: 254/1024 [MB] (14 MBps) [2024-11-19T06:43:59.364Z] Copying: 270/1024 [MB] (16 MBps) [2024-11-19T06:44:00.307Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-19T06:44:01.247Z] Copying: 293/1024 [MB] (12 MBps) [2024-11-19T06:44:02.191Z] Copying: 316/1024 [MB] (22 MBps) [2024-11-19T06:44:03.133Z] Copying: 332/1024 [MB] (16 MBps) [2024-11-19T06:44:04.133Z] Copying: 344/1024 [MB] (11 MBps) [2024-11-19T06:44:05.523Z] Copying: 359/1024 [MB] (14 MBps) [2024-11-19T06:44:06.461Z] Copying: 378/1024 [MB] (18 MBps) [2024-11-19T06:44:07.403Z] Copying: 399/1024 [MB] (20 MBps) [2024-11-19T06:44:08.342Z] Copying: 419/1024 [MB] (20 MBps) [2024-11-19T06:44:09.284Z] Copying: 432/1024 [MB] (12 MBps) [2024-11-19T06:44:10.228Z] Copying: 443/1024 [MB] (10 MBps) 
[2024-11-19T06:44:11.172Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-19T06:44:12.113Z] Copying: 465/1024 [MB] (11 MBps) [2024-11-19T06:44:13.497Z] Copying: 479/1024 [MB] (14 MBps) [2024-11-19T06:44:14.441Z] Copying: 494/1024 [MB] (15 MBps) [2024-11-19T06:44:15.381Z] Copying: 516/1024 [MB] (21 MBps) [2024-11-19T06:44:16.326Z] Copying: 529/1024 [MB] (13 MBps) [2024-11-19T06:44:17.275Z] Copying: 540/1024 [MB] (10 MBps) [2024-11-19T06:44:18.209Z] Copying: 554/1024 [MB] (14 MBps) [2024-11-19T06:44:19.151Z] Copying: 575/1024 [MB] (20 MBps) [2024-11-19T06:44:20.535Z] Copying: 592/1024 [MB] (16 MBps) [2024-11-19T06:44:21.107Z] Copying: 608/1024 [MB] (16 MBps) [2024-11-19T06:44:22.486Z] Copying: 618/1024 [MB] (10 MBps) [2024-11-19T06:44:23.429Z] Copying: 634/1024 [MB] (15 MBps) [2024-11-19T06:44:24.372Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-19T06:44:25.317Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-19T06:44:26.258Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-19T06:44:27.201Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-19T06:44:28.144Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-19T06:44:29.526Z] Copying: 700/1024 [MB] (10 MBps) [2024-11-19T06:44:30.101Z] Copying: 715/1024 [MB] (14 MBps) [2024-11-19T06:44:31.484Z] Copying: 729/1024 [MB] (14 MBps) [2024-11-19T06:44:32.421Z] Copying: 743/1024 [MB] (13 MBps) [2024-11-19T06:44:33.420Z] Copying: 758/1024 [MB] (14 MBps) [2024-11-19T06:44:34.361Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-19T06:44:35.306Z] Copying: 782/1024 [MB] (12 MBps) [2024-11-19T06:44:36.250Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-19T06:44:37.193Z] Copying: 805/1024 [MB] (12 MBps) [2024-11-19T06:44:38.137Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-19T06:44:39.525Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-19T06:44:40.467Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-19T06:44:41.413Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-19T06:44:42.359Z] Copying: 866/1024 [MB] (12 MBps) [2024-11-19T06:44:43.303Z] Copying: 877/1024 [MB] (10 MBps) [2024-11-19T06:44:44.249Z] Copying: 887/1024 [MB] (10 MBps) [2024-11-19T06:44:45.195Z] Copying: 898/1024 [MB] (11 MBps) [2024-11-19T06:44:46.139Z] Copying: 910/1024 [MB] (11 MBps) [2024-11-19T06:44:47.527Z] Copying: 925/1024 [MB] (15 MBps) [2024-11-19T06:44:48.100Z] Copying: 943/1024 [MB] (18 MBps) [2024-11-19T06:44:49.489Z] Copying: 962/1024 [MB] (18 MBps) [2024-11-19T06:44:50.431Z] Copying: 982/1024 [MB] (19 MBps) [2024-11-19T06:44:51.376Z] Copying: 999/1024 [MB] (17 MBps) [2024-11-19T06:44:51.636Z] Copying: 1015/1024 [MB] (16 MBps) [2024-11-19T06:44:51.899Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 06:44:51.648296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.970 [2024-11-19 06:44:51.648372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.970 [2024-11-19 06:44:51.648388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.970 [2024-11-19 06:44:51.648398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.970 [2024-11-19 06:44:51.648423] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.970 [2024-11-19 06:44:51.652325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.970 [2024-11-19 06:44:51.652388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.970 [2024-11-19 06:44:51.652406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.883 ms 00:20:59.970 [2024-11-19 
06:44:51.652416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.970 [2024-11-19 06:44:51.652658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.970 [2024-11-19 06:44:51.652671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.970 [2024-11-19 06:44:51.652680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:59.970 [2024-11-19 06:44:51.652689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.970 [2024-11-19 06:44:51.656334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.970 [2024-11-19 06:44:51.656446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.970 [2024-11-19 06:44:51.656505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:20:59.970 [2024-11-19 06:44:51.656530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.970 [2024-11-19 06:44:51.662795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.970 [2024-11-19 06:44:51.662958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:59.970 [2024-11-19 06:44:51.663026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:20:59.970 [2024-11-19 06:44:51.663049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.970 [2024-11-19 06:44:51.689778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.689958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.971 [2024-11-19 06:44:51.690152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:20:59.971 [2024-11-19 06:44:51.690192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.706901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.707099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.971 [2024-11-19 06:44:51.707171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:20:59.971 [2024-11-19 06:44:51.707195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.707435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.708038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.971 [2024-11-19 06:44:51.708097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:59.971 [2024-11-19 06:44:51.708121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.734272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.734439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:59.971 [2024-11-19 06:44:51.734499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.104 ms 00:20:59.971 [2024-11-19 06:44:51.734521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.760036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.760224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:59.971 [2024-11-19 06:44:51.760281] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.262 ms 00:20:59.971 [2024-11-19 06:44:51.760303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.785175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.785332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.971 [2024-11-19 06:44:51.785389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.754 ms 00:20:59.971 [2024-11-19 06:44:51.785413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.810310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.971 [2024-11-19 06:44:51.810464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.971 [2024-11-19 06:44:51.810520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.763 ms 00:20:59.971 [2024-11-19 06:44:51.810542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.971 [2024-11-19 06:44:51.810587] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.971 [2024-11-19 06:44:51.810617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.810916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
17: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811534] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:59.971 [2024-11-19 06:44:51.811633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811737] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 
06:44:51.811947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.811993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.812001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.812009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:59.972 [2024-11-19 06:44:51.812027] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.972 [2024-11-19 06:44:51.812041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:20:59.972 [2024-11-19 06:44:51.812052] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:59.972 [2024-11-19 06:44:51.812060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:59.972 [2024-11-19 06:44:51.812068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:59.972 [2024-11-19 06:44:51.812076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:59.972 [2024-11-19 06:44:51.812086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.972 [2024-11-19 06:44:51.812095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.972 [2024-11-19 06:44:51.812110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.972 [2024-11-19 06:44:51.812118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.972 [2024-11-19 06:44:51.812125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:59.972 [2024-11-19 06:44:51.812134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.972 [2024-11-19 06:44:51.812143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.972 [2024-11-19 06:44:51.812153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:20:59.972 [2024-11-19 06:44:51.812163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.825761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.972 [2024-11-19 06:44:51.825803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.972 [2024-11-19 06:44:51.825816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.559 ms 00:20:59.972 [2024-11-19 06:44:51.825824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.826246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.972 [2024-11-19 
06:44:51.826266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.972 [2024-11-19 06:44:51.826276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:20:59.972 [2024-11-19 06:44:51.826292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.862548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.972 [2024-11-19 06:44:51.862593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.972 [2024-11-19 06:44:51.862606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.972 [2024-11-19 06:44:51.862615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.862675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.972 [2024-11-19 06:44:51.862686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.972 [2024-11-19 06:44:51.862696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.972 [2024-11-19 06:44:51.862711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.862798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.972 [2024-11-19 06:44:51.862812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.972 [2024-11-19 06:44:51.862823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.972 [2024-11-19 06:44:51.862833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.972 [2024-11-19 06:44:51.862851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.973 [2024-11-19 06:44:51.862861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.973 [2024-11-19 06:44:51.862871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.973 [2024-11-19 06:44:51.862879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:51.947618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:51.947672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.234 [2024-11-19 06:44:51.947685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:51.947694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.234 [2024-11-19 06:44:52.017315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.234 [2024-11-19 06:44:52.017409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017483] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.234 [2024-11-19 06:44:52.017504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.234 [2024-11-19 06:44:52.017634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.234 [2024-11-19 06:44:52.017694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.234 [2024-11-19 06:44:52.017767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.017825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.234 [2024-11-19 06:44:52.017837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.234 [2024-11-19 06:44:52.017846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.234 [2024-11-19 06:44:52.017854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.234 [2024-11-19 06:44:52.018029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.695 ms, result 0 00:21:00.807 00:21:00.807 00:21:01.068 06:44:52 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:03.615 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:03.615 06:44:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:03.615 [2024-11-19 06:44:55.064683] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:03.615 [2024-11-19 06:44:55.064853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76193 ] 00:21:03.615 [2024-11-19 06:44:55.233022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.615 [2024-11-19 06:44:55.350537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.876 [2024-11-19 06:44:55.637765] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:03.876 [2024-11-19 06:44:55.637844] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:03.876 [2024-11-19 06:44:55.798861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.876 [2024-11-19 06:44:55.798943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.876 [2024-11-19 06:44:55.798967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:03.876 [2024-11-19 06:44:55.798976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.876 [2024-11-19 06:44:55.799031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.876 [2024-11-19 06:44:55.799043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.876 [2024-11-19 06:44:55.799054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:03.876 [2024-11-19 06:44:55.799063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.876 [2024-11-19 06:44:55.799084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.876 [2024-11-19 06:44:55.799858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.876 [2024-11-19 06:44:55.799891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.876 [2024-11-19 06:44:55.799900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.876 [2024-11-19 06:44:55.799910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:21:03.877 [2024-11-19 06:44:55.799919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.877 [2024-11-19 06:44:55.801593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.139 [2024-11-19 06:44:55.815918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.815984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.139 [2024-11-19 06:44:55.815998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.327 ms 00:21:04.139 [2024-11-19 06:44:55.816006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.816085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.816096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.139 [2024-11-19 06:44:55.816105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:04.139 [2024-11-19 06:44:55.816113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.824005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:04.139 [2024-11-19 06:44:55.824042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.139 [2024-11-19 06:44:55.824054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.815 ms 00:21:04.139 [2024-11-19 06:44:55.824063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.824148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.824157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.139 [2024-11-19 06:44:55.824166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:04.139 [2024-11-19 06:44:55.824174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.824218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.824228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.139 [2024-11-19 06:44:55.824236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:04.139 [2024-11-19 06:44:55.824244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.824267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.139 [2024-11-19 06:44:55.828214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.828251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.139 [2024-11-19 06:44:55.828262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:21:04.139 [2024-11-19 06:44:55.828273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.828308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.828317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.139 [2024-11-19 06:44:55.828326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:04.139 [2024-11-19 06:44:55.828334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.139 [2024-11-19 06:44:55.828384] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.139 [2024-11-19 06:44:55.828406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.139 [2024-11-19 06:44:55.828444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.139 [2024-11-19 06:44:55.828463] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.139 [2024-11-19 06:44:55.828568] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.139 [2024-11-19 06:44:55.828579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.139 [2024-11-19 06:44:55.828590] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.139 [2024-11-19 06:44:55.828602] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.139 [2024-11-19 06:44:55.828611] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.139 [2024-11-19 06:44:55.828620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.139 [2024-11-19 06:44:55.828628] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.139 [2024-11-19 06:44:55.828636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.139 [2024-11-19 06:44:55.828645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.139 [2024-11-19 06:44:55.828657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.139 [2024-11-19 06:44:55.828664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.139 [2024-11-19 06:44:55.828673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:21:04.139 [2024-11-19 06:44:55.828681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.828764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.828773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.140 [2024-11-19 06:44:55.828783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:04.140 [2024-11-19 06:44:55.828790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.828894] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.140 [2024-11-19 06:44:55.828907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.140 [2024-11-19 06:44:55.828916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.140 [2024-11-19 06:44:55.828949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.828957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.140 [2024-11-19 06:44:55.828964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.828971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.140 [2024-11-19 06:44:55.828980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.140 [2024-11-19 06:44:55.828987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.140 [2024-11-19 06:44:55.828994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.140 [2024-11-19 06:44:55.829001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.140 [2024-11-19 06:44:55.829007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.140 [2024-11-19 06:44:55.829014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.140 [2024-11-19 06:44:55.829021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.140 [2024-11-19 06:44:55.829029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.140 [2024-11-19 06:44:55.829042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.140 [2024-11-19 06:44:55.829057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829064] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.140 [2024-11-19 06:44:55.829078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.140 [2024-11-19 06:44:55.829098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.140 [2024-11-19 06:44:55.829117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.140 [2024-11-19 06:44:55.829137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.140 [2024-11-19 06:44:55.829156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.140 [2024-11-19 06:44:55.829169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.140 [2024-11-19 06:44:55.829175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.140 [2024-11-19 06:44:55.829182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.140 [2024-11-19 06:44:55.829188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.140 [2024-11-19 06:44:55.829195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.140 [2024-11-19 06:44:55.829201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.140 [2024-11-19 06:44:55.829215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.140 [2024-11-19 06:44:55.829221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829228] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.140 [2024-11-19 06:44:55.829236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.140 [2024-11-19 06:44:55.829243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.140 [2024-11-19 06:44:55.829262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.140 [2024-11-19 06:44:55.829270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.140 [2024-11-19 06:44:55.829284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.140 
[2024-11-19 06:44:55.829291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.140 [2024-11-19 06:44:55.829298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.140 [2024-11-19 06:44:55.829305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.140 [2024-11-19 06:44:55.829314] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.140 [2024-11-19 06:44:55.829324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.140 [2024-11-19 06:44:55.829340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.140 [2024-11-19 06:44:55.829348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.140 [2024-11-19 06:44:55.829356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.140 [2024-11-19 06:44:55.829363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.140 [2024-11-19 06:44:55.829370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.140 [2024-11-19 06:44:55.829377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.140 [2024-11-19 06:44:55.829384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.140 [2024-11-19 06:44:55.829391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.140 [2024-11-19 06:44:55.829398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.140 [2024-11-19 06:44:55.829434] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.140 [2024-11-19 06:44:55.829446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.140 [2024-11-19 06:44:55.829462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.140 [2024-11-19 06:44:55.829470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.140 [2024-11-19 06:44:55.829477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.140 [2024-11-19 06:44:55.829484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.829492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.140 [2024-11-19 06:44:55.829499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:21:04.140 [2024-11-19 06:44:55.829506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.861035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.861081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.140 [2024-11-19 06:44:55.861092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.483 ms 00:21:04.140 [2024-11-19 06:44:55.861100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.861195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.861204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.140 [2024-11-19 06:44:55.861213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:04.140 [2024-11-19 06:44:55.861221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.912733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.912955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.140 [2024-11-19 06:44:55.912979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.455 ms 00:21:04.140 [2024-11-19 06:44:55.912988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.913039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.140 [2024-11-19 06:44:55.913049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.140 [2024-11-19 06:44:55.913059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.140 [2024-11-19 06:44:55.913073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.140 [2024-11-19 06:44:55.913636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.913659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.141 [2024-11-19 06:44:55.913671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:21:04.141 [2024-11-19 06:44:55.913680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.913831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.913842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.141 [2024-11-19 06:44:55.913851] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:04.141 [2024-11-19 06:44:55.913865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.929527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.929568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.141 [2024-11-19 06:44:55.929583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.642 ms 00:21:04.141 [2024-11-19 06:44:55.929591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.944075] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:04.141 [2024-11-19 06:44:55.944257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.141 [2024-11-19 06:44:55.944276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.944285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.141 [2024-11-19 06:44:55.944296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.577 ms 00:21:04.141 [2024-11-19 06:44:55.944303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.970241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.970307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:04.141 [2024-11-19 06:44:55.970321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.621 ms 00:21:04.141 [2024-11-19 06:44:55.970330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.983041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.983226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:04.141 [2024-11-19 06:44:55.983247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:21:04.141 [2024-11-19 06:44:55.983255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.995802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.995847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:04.141 [2024-11-19 06:44:55.995859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.437 ms 00:21:04.141 [2024-11-19 06:44:55.995867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:55.996529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:55.996561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.141 [2024-11-19 06:44:55.996572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:21:04.141 [2024-11-19 06:44:55.996584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.141 [2024-11-19 06:44:56.061830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.141 [2024-11-19 06:44:56.061887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:04.141 [2024-11-19 06:44:56.061908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.226 ms 00:21:04.141 [2024-11-19 06:44:56.061918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.073237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:04.403 [2024-11-19 06:44:56.076386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.076429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.403 [2024-11-19 06:44:56.076443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.383 ms 00:21:04.403 [2024-11-19 06:44:56.076453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.076540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.076552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:04.403 [2024-11-19 06:44:56.076561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:04.403 [2024-11-19 06:44:56.076572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.076644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.076655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.403 [2024-11-19 06:44:56.076664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:04.403 [2024-11-19 06:44:56.076672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.076693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.076702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.403 [2024-11-19 06:44:56.076710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.403 [2024-11-19 06:44:56.076718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.076753] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:04.403 [2024-11-19 06:44:56.076767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.076776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:04.403 [2024-11-19 06:44:56.076784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:04.403 [2024-11-19 06:44:56.076792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.102679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.102843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.403 [2024-11-19 06:44:56.102907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.868 ms 00:21:04.403 [2024-11-19 06:44:56.102963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.403 [2024-11-19 06:44:56.103056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.403 [2024-11-19 06:44:56.103083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.403 [2024-11-19 06:44:56.103108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:04.403 [2024-11-19 06:44:56.103129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:04.403 [2024-11-19 06:44:56.104395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.041 ms, result 0 00:21:05.349  [2024-11-19T06:44:58.215Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T06:44:59.160Z] Copying: 45/1024 [MB] (33 MBps) [2024-11-19T06:45:00.542Z] Copying: 62/1024 [MB] (16 MBps) [2024-11-19T06:45:01.116Z] Copying: 80/1024 [MB] (18 MBps) [2024-11-19T06:45:02.132Z] Copying: 95/1024 [MB] (15 MBps) [2024-11-19T06:45:03.516Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-19T06:45:04.460Z] Copying: 125/1024 [MB] (15 MBps) [2024-11-19T06:45:05.401Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-19T06:45:06.338Z] Copying: 151/1024 [MB] (15 MBps) [2024-11-19T06:45:07.273Z] Copying: 168/1024 [MB] (17 MBps) [2024-11-19T06:45:08.205Z] Copying: 205/1024 [MB] (37 MBps) [2024-11-19T06:45:09.138Z] Copying: 247/1024 [MB] (41 MBps) [2024-11-19T06:45:10.516Z] Copying: 279/1024 [MB] (31 MBps) [2024-11-19T06:45:11.453Z] Copying: 310/1024 [MB] (30 MBps) [2024-11-19T06:45:12.391Z] Copying: 324/1024 [MB] (14 MBps) [2024-11-19T06:45:13.334Z] Copying: 366/1024 [MB] (42 MBps) [2024-11-19T06:45:14.270Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-19T06:45:15.214Z] Copying: 401/1024 [MB] (23 MBps) [2024-11-19T06:45:16.154Z] Copying: 416/1024 [MB] (15 MBps) [2024-11-19T06:45:17.526Z] Copying: 432/1024 [MB] (15 MBps) [2024-11-19T06:45:18.458Z] Copying: 465/1024 [MB] (32 MBps) [2024-11-19T06:45:19.401Z] Copying: 498/1024 [MB] (32 MBps) [2024-11-19T06:45:20.339Z] Copying: 530/1024 [MB] (32 MBps) [2024-11-19T06:45:21.272Z] Copying: 561/1024 [MB] (31 MBps) [2024-11-19T06:45:22.228Z] Copying: 592/1024 [MB] (30 MBps) [2024-11-19T06:45:23.165Z] Copying: 623/1024 [MB] (31 MBps) [2024-11-19T06:45:24.550Z] Copying: 659/1024 [MB] (35 MBps) [2024-11-19T06:45:25.124Z] Copying: 672/1024 [MB] (12 MBps) [2024-11-19T06:45:26.503Z] Copying: 688/1024 [MB] (15 MBps) [2024-11-19T06:45:27.442Z] Copying: 712/1024 [MB] (24 MBps) [2024-11-19T06:45:28.381Z] Copying: 756/1024 [MB] (43 MBps) [2024-11-19T06:45:29.328Z] Copying: 768/1024 [MB] (12 MBps) [2024-11-19T06:45:30.273Z] Copying: 800/1024 [MB] (31 MBps) [2024-11-19T06:45:31.253Z] Copying: 832/1024 [MB] (31 MBps) [2024-11-19T06:45:32.187Z] Copying: 842/1024 [MB] (10 MBps) [2024-11-19T06:45:33.131Z] Copying: 870/1024 [MB] (27 MBps) [2024-11-19T06:45:34.505Z] Copying: 902/1024 [MB] (31 MBps) [2024-11-19T06:45:35.444Z] Copying: 932/1024 [MB] (29 MBps) [2024-11-19T06:45:36.385Z] Copying: 960/1024 [MB] (28 MBps) [2024-11-19T06:45:37.318Z] Copying: 974/1024 [MB] (13 MBps) [2024-11-19T06:45:38.260Z] Copying: 1000/1024 [MB] (26 MBps) [2024-11-19T06:45:38.260Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 06:45:37.910613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.910676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:46.331 [2024-11-19 06:45:37.910692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.331 [2024-11-19 06:45:37.910701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.910723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:46.331 [2024-11-19 06:45:37.913795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.913835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:46.331 
[2024-11-19 06:45:37.913854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:21:46.331 [2024-11-19 06:45:37.913863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.916832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.916873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:46.331 [2024-11-19 06:45:37.916885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:21:46.331 [2024-11-19 06:45:37.916893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.936636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.936689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:46.331 [2024-11-19 06:45:37.936701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.726 ms 00:21:46.331 [2024-11-19 06:45:37.936710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.942878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.942911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:46.331 [2024-11-19 06:45:37.942931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.132 ms 00:21:46.331 [2024-11-19 06:45:37.942939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.969620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.969661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:46.331 [2024-11-19 06:45:37.969672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.632 ms 00:21:46.331 [2024-11-19 06:45:37.969680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.985726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.985765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:46.331 [2024-11-19 06:45:37.985777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.002 ms 00:21:46.331 [2024-11-19 06:45:37.985785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:37.987728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:37.987769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:46.331 [2024-11-19 06:45:37.987780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:21:46.331 [2024-11-19 06:45:37.987788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:38.013347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:38.013390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:46.331 [2024-11-19 06:45:38.013401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.544 ms 00:21:46.331 [2024-11-19 06:45:38.013414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:38.038502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:38.038565] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:46.331 [2024-11-19 06:45:38.038575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.044 ms 00:21:46.331 [2024-11-19 06:45:38.038582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:38.063154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:38.063193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:46.331 [2024-11-19 06:45:38.063203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.528 ms 00:21:46.331 [2024-11-19 06:45:38.063209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:38.087519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.331 [2024-11-19 06:45:38.087560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:46.331 [2024-11-19 06:45:38.087570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.239 ms 00:21:46.331 [2024-11-19 06:45:38.087577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.331 [2024-11-19 06:45:38.087619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:46.331 [2024-11-19 06:45:38.087635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:21:46.331 [2024-11-19 06:45:38.087652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:46.331 [2024-11-19 06:45:38.087706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.087995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088148] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088333] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:46.332 [2024-11-19 06:45:38.088387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:46.333 [2024-11-19 06:45:38.088394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:46.333 [2024-11-19 06:45:38.088402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:46.333 [2024-11-19 06:45:38.088418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:46.333 [2024-11-19 06:45:38.088429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:21:46.333 [2024-11-19 06:45:38.088438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:21:46.333 [2024-11-19 06:45:38.088445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:21:46.333 [2024-11-19 06:45:38.088453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:21:46.333 [2024-11-19 06:45:38.088468] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:21:46.333 [2024-11-19 06:45:38.088475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:46.333 [2024-11-19 06:45:38.088483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:46.333 [2024-11-19 06:45:38.088497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:46.333 [2024-11-19 06:45:38.088504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:46.333 [2024-11-19 06:45:38.088510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:46.333 [2024-11-19 06:45:38.088517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.333 [2024-11-19 06:45:38.088525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:46.333 [2024-11-19 06:45:38.088534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:21:46.333 [2024-11-19 06:45:38.088541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.101946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.333 [2024-11-19 06:45:38.101977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:46.333 [2024-11-19 06:45:38.101988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:21:46.333 [2024-11-19 06:45:38.101996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 
[2024-11-19 06:45:38.102388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.333 [2024-11-19 06:45:38.102398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:46.333 [2024-11-19 06:45:38.102416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:21:46.333 [2024-11-19 06:45:38.102423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.138673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.333 [2024-11-19 06:45:38.138717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.333 [2024-11-19 06:45:38.138728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.333 [2024-11-19 06:45:38.138737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.138798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.333 [2024-11-19 06:45:38.138806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.333 [2024-11-19 06:45:38.138820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.333 [2024-11-19 06:45:38.138827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.138910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.333 [2024-11-19 06:45:38.138944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.333 [2024-11-19 06:45:38.138954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.333 [2024-11-19 06:45:38.138961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.138978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.333 [2024-11-19 06:45:38.138986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.333 [2024-11-19 06:45:38.138994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.333 [2024-11-19 06:45:38.139006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.333 [2024-11-19 06:45:38.223472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.333 [2024-11-19 06:45:38.223543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.333 [2024-11-19 06:45:38.223558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.333 [2024-11-19 06:45:38.223567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.594 [2024-11-19 06:45:38.292183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.594 [2024-11-19 06:45:38.292277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 
06:45:38.292285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.594 [2024-11-19 06:45:38.292366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.594 [2024-11-19 06:45:38.292491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:46.594 [2024-11-19 06:45:38.292548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.594 [2024-11-19 06:45:38.292618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.594 [2024-11-19 06:45:38.292683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.594 [2024-11-19 06:45:38.292691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.594 [2024-11-19 06:45:38.292700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.594 [2024-11-19 06:45:38.292833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.185 ms, result 0 00:21:47.529 00:21:47.529 00:21:47.529 06:45:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:47.529 [2024-11-19 06:45:39.177389] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:47.529 [2024-11-19 06:45:39.177507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76647 ] 00:21:47.529 [2024-11-19 06:45:39.332865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.529 [2024-11-19 06:45:39.416499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.787 [2024-11-19 06:45:39.621214] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:47.787 [2024-11-19 06:45:39.621258] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.047 [2024-11-19 06:45:39.768192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.768225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.047 [2024-11-19 06:45:39.768239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.047 [2024-11-19 06:45:39.768245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.768278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.768286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.047 [2024-11-19 06:45:39.768294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:48.047 [2024-11-19 06:45:39.768299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.768312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.047 [2024-11-19 06:45:39.768816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.047 [2024-11-19 06:45:39.768832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.768838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.047 [2024-11-19 06:45:39.768845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:21:48.047 [2024-11-19 06:45:39.768850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.769745] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.047 [2024-11-19 06:45:39.779292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.779317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.047 [2024-11-19 06:45:39.779326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.549 ms 00:21:48.047 [2024-11-19 06:45:39.779332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.779375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.779382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.047 [2024-11-19 06:45:39.779388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:48.047 [2024-11-19 06:45:39.779394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.783670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:48.047 [2024-11-19 06:45:39.783689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.047 [2024-11-19 06:45:39.783696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:21:48.047 [2024-11-19 06:45:39.783703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.783755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.783762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.047 [2024-11-19 06:45:39.783768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:48.047 [2024-11-19 06:45:39.783773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.783804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.783811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.047 [2024-11-19 06:45:39.783817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.047 [2024-11-19 06:45:39.783822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.047 [2024-11-19 06:45:39.783835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.047 [2024-11-19 06:45:39.786499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.047 [2024-11-19 06:45:39.786519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.047 [2024-11-19 06:45:39.786526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:21:48.048 [2024-11-19 06:45:39.786533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.048 [2024-11-19 06:45:39.786558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.048 [2024-11-19 06:45:39.786564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.048 [2024-11-19 06:45:39.786570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:48.048 [2024-11-19 06:45:39.786576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.048 [2024-11-19 06:45:39.786590] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.048 [2024-11-19 06:45:39.786603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.048 [2024-11-19 06:45:39.786628] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.048 [2024-11-19 06:45:39.786641] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:48.048 [2024-11-19 06:45:39.786718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.048 [2024-11-19 06:45:39.786730] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.048 [2024-11-19 06:45:39.786740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:48.048 [2024-11-19 06:45:39.786748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.048 [2024-11-19 06:45:39.786754] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.048 [2024-11-19 06:45:39.786760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.048 [2024-11-19 06:45:39.786765] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.048 [2024-11-19 06:45:39.786771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.048 [2024-11-19 06:45:39.786776] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.048 [2024-11-19 06:45:39.786784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.048 [2024-11-19 06:45:39.786789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.048 [2024-11-19 06:45:39.786795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:48.048 [2024-11-19 06:45:39.786800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.048 [2024-11-19 06:45:39.786863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.048 [2024-11-19 06:45:39.786869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.048 [2024-11-19 06:45:39.786875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:48.048 [2024-11-19 06:45:39.786881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.048 [2024-11-19 06:45:39.786964] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.048 [2024-11-19 06:45:39.786974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.048 [2024-11-19 06:45:39.786980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.048 [2024-11-19 06:45:39.786986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.786992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.048 [2024-11-19 06:45:39.786997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.048 [2024-11-19 06:45:39.787013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.048 [2024-11-19 06:45:39.787024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.048 [2024-11-19 06:45:39.787030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.048 [2024-11-19 06:45:39.787035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.048 [2024-11-19 06:45:39.787040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.048 [2024-11-19 06:45:39.787045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:48.048 [2024-11-19 06:45:39.787054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.048 [2024-11-19 06:45:39.787065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787069] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.048 [2024-11-19 06:45:39.787080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.048 [2024-11-19 06:45:39.787095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.048 [2024-11-19 06:45:39.787110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.048 [2024-11-19 06:45:39.787125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.048 [2024-11-19 06:45:39.787140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.048 [2024-11-19 06:45:39.787150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.048 [2024-11-19 06:45:39.787154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:48.048 [2024-11-19 06:45:39.787159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.048 [2024-11-19 06:45:39.787164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.048 [2024-11-19 06:45:39.787168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:48.048 [2024-11-19 06:45:39.787174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.048 [2024-11-19 06:45:39.787185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:48.048 [2024-11-19 06:45:39.787190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.048 [2024-11-19 06:45:39.787201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.048 [2024-11-19 06:45:39.787206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.048 [2024-11-19 06:45:39.787217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.048 [2024-11-19 06:45:39.787222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.048 [2024-11-19 06:45:39.787227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.048 
[2024-11-19 06:45:39.787232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.048 [2024-11-19 06:45:39.787237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.048 [2024-11-19 06:45:39.787242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.048 [2024-11-19 06:45:39.787248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.048 [2024-11-19 06:45:39.787255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.048 [2024-11-19 06:45:39.787261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.048 [2024-11-19 06:45:39.787267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:48.048 [2024-11-19 06:45:39.787272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:48.048 [2024-11-19 06:45:39.787277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:48.048 [2024-11-19 06:45:39.787283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:48.049 [2024-11-19 06:45:39.787288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:48.049 [2024-11-19 06:45:39.787293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:48.049 [2024-11-19 06:45:39.787299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:48.049 [2024-11-19 06:45:39.787304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:48.049 [2024-11-19 06:45:39.787309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:48.049 [2024-11-19 06:45:39.787335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.049 [2024-11-19 06:45:39.787343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.049 [2024-11-19 06:45:39.787355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.049 [2024-11-19 06:45:39.787360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.049 [2024-11-19 06:45:39.787366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.049 [2024-11-19 06:45:39.787372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.787377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.049 [2024-11-19 06:45:39.787383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:21:48.049 [2024-11-19 06:45:39.787388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.807999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.808021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.049 [2024-11-19 06:45:39.808029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.581 ms 00:21:48.049 [2024-11-19 06:45:39.808034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.808095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.808102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.049 [2024-11-19 06:45:39.808107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:48.049 [2024-11-19 06:45:39.808113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.848487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.848513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.049 [2024-11-19 06:45:39.848522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.335 ms 00:21:48.049 [2024-11-19 06:45:39.848528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.848551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.848558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.049 [2024-11-19 06:45:39.848564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:48.049 [2024-11-19 06:45:39.848573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.848877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.848896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.049 [2024-11-19 06:45:39.848903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:21:48.049 [2024-11-19 06:45:39.848909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.849014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.849021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.049 [2024-11-19 06:45:39.849028] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:48.049 [2024-11-19 06:45:39.849033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.859427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.859448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.049 [2024-11-19 06:45:39.859456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.374 ms 00:21:48.049 [2024-11-19 06:45:39.859463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.869067] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:21:48.049 [2024-11-19 06:45:39.869091] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:48.049 [2024-11-19 06:45:39.869100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.869107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:48.049 [2024-11-19 06:45:39.869113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.554 ms 00:21:48.049 [2024-11-19 06:45:39.869119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.887578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.887604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:48.049 [2024-11-19 06:45:39.887612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.429 ms 00:21:48.049 [2024-11-19 06:45:39.887619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.896490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.896516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:48.049 [2024-11-19 06:45:39.896523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.843 ms 00:21:48.049 [2024-11-19 06:45:39.896529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.905038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.905059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:48.049 [2024-11-19 06:45:39.905066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.485 ms 00:21:48.049 [2024-11-19 06:45:39.905072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.905572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.905589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.049 [2024-11-19 06:45:39.905596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:21:48.049 [2024-11-19 06:45:39.905603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.948868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.948904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:48.049 [2024-11-19 06:45:39.948918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.252 ms 00:21:48.049 [2024-11-19 06:45:39.948930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.956778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:48.049 [2024-11-19 06:45:39.958452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.958471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.049 [2024-11-19 06:45:39.958478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.486 ms 00:21:48.049 [2024-11-19 06:45:39.958484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.958539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.958548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:48.049 [2024-11-19 06:45:39.958555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:48.049 [2024-11-19 06:45:39.958562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.959025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.959046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.049 [2024-11-19 06:45:39.959054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:21:48.049 [2024-11-19 06:45:39.959060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.959077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.049 [2024-11-19 06:45:39.959083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.049 [2024-11-19 06:45:39.959089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.049 [2024-11-19 06:45:39.959095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.049 [2024-11-19 06:45:39.959130] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:48.049 [2024-11-19 06:45:39.959140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.050 [2024-11-19 06:45:39.959146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:48.050 [2024-11-19 06:45:39.959152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:48.050 [2024-11-19 06:45:39.959158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.050 [2024-11-19 06:45:39.976266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.050 [2024-11-19 06:45:39.976289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.050 [2024-11-19 06:45:39.976297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.094 ms 00:21:48.050 [2024-11-19 06:45:39.976306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.050 [2024-11-19 06:45:39.976360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.050 [2024-11-19 06:45:39.976367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.050 [2024-11-19 06:45:39.976374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:48.050 [2024-11-19 06:45:39.976379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.308 
[2024-11-19 06:45:39.979195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.933 ms, result 0 00:21:49.246  [2024-11-19T06:45:42.558Z] Copying: 1052/1048576 [kB] (1052 kBps) [2024-11-19T06:45:43.130Z] Copying: 22/1024 [MB] (21 MBps) [2024-11-19T06:45:44.516Z] Copying: 40/1024 [MB] (18 MBps) [2024-11-19T06:45:45.460Z] Copying: 53/1024 [MB] (12 MBps) [2024-11-19T06:45:46.401Z] Copying: 69/1024 [MB] (15 MBps) [2024-11-19T06:45:47.341Z] Copying: 95/1024 [MB] (25 MBps) [2024-11-19T06:45:48.282Z] Copying: 115/1024 [MB] (19 MBps) [2024-11-19T06:45:49.225Z] Copying: 131/1024 [MB] (16 MBps) [2024-11-19T06:45:50.163Z] Copying: 147/1024 [MB] (16 MBps) [2024-11-19T06:45:51.543Z] Copying: 159/1024 [MB] (11 MBps) [2024-11-19T06:45:52.485Z] Copying: 174/1024 [MB] (15 MBps) [2024-11-19T06:45:53.424Z] Copying: 189/1024 [MB] (14 MBps) [2024-11-19T06:45:54.366Z] Copying: 203/1024 [MB] (14 MBps) [2024-11-19T06:45:55.305Z] Copying: 225/1024 [MB] (22 MBps) [2024-11-19T06:45:56.298Z] Copying: 255/1024 [MB] (29 MBps) [2024-11-19T06:45:57.238Z] Copying: 286/1024 [MB] (31 MBps) [2024-11-19T06:45:58.180Z] Copying: 309/1024 [MB] (23 MBps) [2024-11-19T06:45:59.190Z] Copying: 332/1024 [MB] (22 MBps) [2024-11-19T06:46:00.129Z] Copying: 353/1024 [MB] (21 MBps) [2024-11-19T06:46:01.514Z] Copying: 383/1024 [MB] (29 MBps) [2024-11-19T06:46:02.457Z] Copying: 398/1024 [MB] (15 MBps) [2024-11-19T06:46:03.403Z] Copying: 413/1024 [MB] (14 MBps) [2024-11-19T06:46:04.348Z] Copying: 425/1024 [MB] (12 MBps) [2024-11-19T06:46:05.290Z] Copying: 443/1024 [MB] (18 MBps) [2024-11-19T06:46:06.232Z] Copying: 467/1024 [MB] (23 MBps) [2024-11-19T06:46:07.172Z] Copying: 484/1024 [MB] (16 MBps) [2024-11-19T06:46:08.556Z] Copying: 502/1024 [MB] (17 MBps) [2024-11-19T06:46:09.129Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-19T06:46:10.529Z] Copying: 538/1024 [MB] (12 MBps) [2024-11-19T06:46:11.473Z] Copying: 551/1024 [MB] (12 MBps) [2024-11-19T06:46:12.419Z] Copying: 563/1024 [MB] (12 MBps) [2024-11-19T06:46:13.362Z] Copying: 574/1024 [MB] (10 MBps) [2024-11-19T06:46:14.307Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-19T06:46:15.250Z] Copying: 595/1024 [MB] (10 MBps) [2024-11-19T06:46:16.193Z] Copying: 606/1024 [MB] (10 MBps) [2024-11-19T06:46:17.138Z] Copying: 618/1024 [MB] (12 MBps) [2024-11-19T06:46:18.528Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-19T06:46:19.471Z] Copying: 639/1024 [MB] (10 MBps) [2024-11-19T06:46:20.416Z] Copying: 652/1024 [MB] (12 MBps) [2024-11-19T06:46:21.362Z] Copying: 662/1024 [MB] (10 MBps) [2024-11-19T06:46:22.308Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-19T06:46:23.251Z] Copying: 687/1024 [MB] (14 MBps) [2024-11-19T06:46:24.194Z] Copying: 697/1024 [MB] (10 MBps) [2024-11-19T06:46:25.134Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-19T06:46:26.519Z] Copying: 725/1024 [MB] (17 MBps) [2024-11-19T06:46:27.462Z] Copying: 735/1024 [MB] (10 MBps) [2024-11-19T06:46:28.458Z] Copying: 752/1024 [MB] (17 MBps) [2024-11-19T06:46:29.411Z] Copying: 771/1024 [MB] (18 MBps) [2024-11-19T06:46:30.356Z] Copying: 782/1024 [MB] (10 MBps) [2024-11-19T06:46:31.382Z] Copying: 793/1024 [MB] (11 MBps) [2024-11-19T06:46:32.325Z] Copying: 817/1024 [MB] (24 MBps) [2024-11-19T06:46:33.271Z] Copying: 835/1024 [MB] (18 MBps) [2024-11-19T06:46:34.216Z] Copying: 851/1024 [MB] (16 MBps) [2024-11-19T06:46:35.154Z] Copying: 862/1024 [MB] (10 MBps) [2024-11-19T06:46:36.538Z] Copying: 879/1024 [MB] (16 MBps) [2024-11-19T06:46:37.478Z] Copying: 890/1024 [MB] (11 MBps) 
[2024-11-19T06:46:38.422Z] Copying: 905/1024 [MB] (14 MBps) [2024-11-19T06:46:39.365Z] Copying: 915/1024 [MB] (10 MBps) [2024-11-19T06:46:40.307Z] Copying: 933/1024 [MB] (17 MBps) [2024-11-19T06:46:41.255Z] Copying: 947/1024 [MB] (13 MBps) [2024-11-19T06:46:42.197Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-19T06:46:43.136Z] Copying: 971/1024 [MB] (13 MBps) [2024-11-19T06:46:44.519Z] Copying: 988/1024 [MB] (17 MBps) [2024-11-19T06:46:45.464Z] Copying: 1000/1024 [MB] (11 MBps) [2024-11-19T06:46:45.725Z] Copying: 1010/1024 [MB] (10 MBps) [2024-11-19T06:46:46.298Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 06:46:46.122245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.122572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.369 [2024-11-19 06:46:46.122823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.369 [2024-11-19 06:46:46.122848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.122915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:54.369 [2024-11-19 06:46:46.126187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.126226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.369 [2024-11-19 06:46:46.126239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:22:54.369 [2024-11-19 06:46:46.126249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.126519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.126532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.369 [2024-11-19 06:46:46.126543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:22:54.369 [2024-11-19 06:46:46.126553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.139893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.139948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.369 [2024-11-19 06:46:46.139962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.316 ms 00:22:54.369 [2024-11-19 06:46:46.139971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.146203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.146237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.369 [2024-11-19 06:46:46.146249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:22:54.369 [2024-11-19 06:46:46.146258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.172401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.369 [2024-11-19 06:46:46.172441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.369 [2024-11-19 06:46:46.172453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.073 ms 00:22:54.369 [2024-11-19 06:46:46.172460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.369 [2024-11-19 06:46:46.188159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:54.369 [2024-11-19 06:46:46.188207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.369 [2024-11-19 06:46:46.188220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.653 ms 00:22:54.369 [2024-11-19 06:46:46.188229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.630 [2024-11-19 06:46:46.471340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.630 [2024-11-19 06:46:46.471397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.630 [2024-11-19 06:46:46.471410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 283.057 ms 00:22:54.630 [2024-11-19 06:46:46.471419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.630 [2024-11-19 06:46:46.497430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.630 [2024-11-19 06:46:46.497467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.630 [2024-11-19 06:46:46.497479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:22:54.630 [2024-11-19 06:46:46.497487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.631 [2024-11-19 06:46:46.522796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.631 [2024-11-19 06:46:46.522832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.631 [2024-11-19 06:46:46.522855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.265 ms 00:22:54.631 [2024-11-19 06:46:46.522863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.631 [2024-11-19 06:46:46.547169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.631 [2024-11-19 06:46:46.547204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.631 [2024-11-19 06:46:46.547215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.262 ms 00:22:54.631 [2024-11-19 06:46:46.547223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.893 [2024-11-19 06:46:46.571970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.893 [2024-11-19 06:46:46.572006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.893 [2024-11-19 06:46:46.572017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.679 ms 00:22:54.893 [2024-11-19 06:46:46.572025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.893 [2024-11-19 06:46:46.572067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.893 [2024-11-19 06:46:46.572083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:22:54.893 [2024-11-19 06:46:46.572095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.893 [2024-11-19 06:46:46.572152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572334] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572535] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 
06:46:46.572731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.894 [2024-11-19 06:46:46.572865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.895 [2024-11-19 06:46:46.572872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.895 [2024-11-19 06:46:46.572881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.895 [2024-11-19 06:46:46.572889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.895 [2024-11-19 06:46:46.572897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.895 [2024-11-19 06:46:46.572914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.895 [2024-11-19 06:46:46.572937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f4e8d2c-3c6e-43e9-8154-ef14d4021b95 00:22:54.895 [2024-11-19 06:46:46.572947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:22:54.895 [2024-11-19 06:46:46.572955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:22:54.895 [2024-11-19 06:46:46.572964] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:22:54.895 [2024-11-19 06:46:46.572974] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0073 00:22:54.895 [2024-11-19 06:46:46.572983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.895 [2024-11-19 06:46:46.572997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.895 [2024-11-19 06:46:46.573005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.895 [2024-11-19 06:46:46.573019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.895 [2024-11-19 06:46:46.573026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.895 [2024-11-19 06:46:46.573034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.895 [2024-11-19 06:46:46.573045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.895 [2024-11-19 06:46:46.573054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:22:54.895 [2024-11-19 06:46:46.573062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.586537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.895 [2024-11-19 06:46:46.586570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.895 [2024-11-19 06:46:46.586581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.456 ms 00:22:54.895 [2024-11-19 06:46:46.586596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.587014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.895 [2024-11-19 06:46:46.587048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.895 [2024-11-19 06:46:46.587058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:22:54.895 [2024-11-19 06:46:46.587068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.623224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.623263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.895 [2024-11-19 06:46:46.623280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.623290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.623350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.623361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.895 [2024-11-19 06:46:46.623371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.623380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.623462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.623485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.895 [2024-11-19 06:46:46.623496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.623509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.623543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.623552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.895 [2024-11-19 06:46:46.623561] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.623569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.706947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.706996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.895 [2024-11-19 06:46:46.707016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.707025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.895 [2024-11-19 06:46:46.775483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.895 [2024-11-19 06:46:46.775584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.895 [2024-11-19 06:46:46.775675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.895 [2024-11-19 06:46:46.775801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.895 [2024-11-19 06:46:46.775864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.775915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.775949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.895 [2024-11-19 06:46:46.775960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.775969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.776018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.895 [2024-11-19 06:46:46.776029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:22:54.895 [2024-11-19 06:46:46.776041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.895 [2024-11-19 06:46:46.776050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.895 [2024-11-19 06:46:46.776189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.906 ms, result 0 00:22:55.839 00:22:55.839 00:22:55.839 06:46:47 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.387 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74647 00:22:58.387 06:46:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74647 ']' 00:22:58.387 Process with pid 74647 is not found 00:22:58.387 Remove shared memory files 00:22:58.387 06:46:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74647 00:22:58.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74647) - No such process 00:22:58.387 06:46:49 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74647 is not found' 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.387 06:46:49 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.387 00:22:58.387 real 4m20.766s 00:22:58.387 user 4m9.811s 00:22:58.387 sys 0m11.095s 00:22:58.387 06:46:49 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.387 ************************************ 00:22:58.387 END TEST ftl_restore 00:22:58.387 06:46:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:58.387 ************************************ 00:22:58.387 06:46:49 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:58.387 06:46:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:58.387 06:46:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.387 06:46:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:58.387 ************************************ 00:22:58.387 START TEST ftl_dirty_shutdown 00:22:58.387 ************************************ 00:22:58.387 06:46:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:58.387 * Looking for test storage... 
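Before the dirty_shutdown run gets going, it is worth spelling out what the 'testfile: OK' verdict above rests on: ftl_restore records an md5 of the data it wrote through ftl0, then after the device is brought back it reads the data out again (the spdk_dd 'Copying' progress above) and re-checks the sum once the FTL shutdown completes. A minimal sketch of that pattern with illustrative dd sizes (the real test drives the I/O through spdk_dd against the FTL bdev rather than plain files):

  # write a known pattern and remember its checksum
  dd if=/dev/urandom of=testfile bs=1M count=256
  md5sum testfile > testfile.md5
  # ... the data goes through ftl0, the device is torn down and restored ...
  # read it back and confirm nothing changed across the restore
  md5sum -c testfile.md5        # expected: "testfile: OK", as printed above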
00:22:58.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:58.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.387 --rc genhtml_branch_coverage=1 00:22:58.387 --rc genhtml_function_coverage=1 00:22:58.387 --rc genhtml_legend=1 00:22:58.387 --rc geninfo_all_blocks=1 00:22:58.387 --rc geninfo_unexecuted_blocks=1 00:22:58.387 00:22:58.387 ' 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:58.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.387 --rc genhtml_branch_coverage=1 00:22:58.387 --rc genhtml_function_coverage=1 00:22:58.387 --rc genhtml_legend=1 00:22:58.387 --rc geninfo_all_blocks=1 00:22:58.387 --rc geninfo_unexecuted_blocks=1 00:22:58.387 00:22:58.387 ' 00:22:58.387 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:58.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.387 --rc genhtml_branch_coverage=1 00:22:58.387 --rc genhtml_function_coverage=1 00:22:58.387 --rc genhtml_legend=1 00:22:58.387 --rc geninfo_all_blocks=1 00:22:58.387 --rc geninfo_unexecuted_blocks=1 00:22:58.387 00:22:58.388 ' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:58.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.388 --rc genhtml_branch_coverage=1 00:22:58.388 --rc genhtml_function_coverage=1 00:22:58.388 --rc genhtml_legend=1 00:22:58.388 --rc geninfo_all_blocks=1 00:22:58.388 --rc geninfo_unexecuted_blocks=1 00:22:58.388 00:22:58.388 ' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:58.388 06:46:50 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77442 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77442 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77442 ']' 00:22:58.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.388 06:46:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:58.388 [2024-11-19 06:46:50.283050] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
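The waitforlisten call just traced is what gates the rest of the script on the target actually being up: it polls the RPC socket until spdk_tgt answers. A rough sketch of that launch-and-wait pattern (the polling loop and socket path here are illustrative; the real helper in autotest_common.sh is more elaborate):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # keep probing the UNIX-domain RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done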
00:22:58.388 [2024-11-19 06:46:50.283204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77442 ] 00:22:58.650 [2024-11-19 06:46:50.449243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.650 [2024-11-19 06:46:50.570057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:59.594 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:59.855 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:59.855 { 00:22:59.855 "name": "nvme0n1", 00:22:59.855 "aliases": [ 00:22:59.855 "401b69b2-b601-49b9-a64a-4e296978c637" 00:22:59.855 ], 00:22:59.855 "product_name": "NVMe disk", 00:22:59.855 "block_size": 4096, 00:22:59.855 "num_blocks": 1310720, 00:22:59.855 "uuid": "401b69b2-b601-49b9-a64a-4e296978c637", 00:22:59.855 "numa_id": -1, 00:22:59.855 "assigned_rate_limits": { 00:22:59.855 "rw_ios_per_sec": 0, 00:22:59.856 "rw_mbytes_per_sec": 0, 00:22:59.856 "r_mbytes_per_sec": 0, 00:22:59.856 "w_mbytes_per_sec": 0 00:22:59.856 }, 00:22:59.856 "claimed": true, 00:22:59.856 "claim_type": "read_many_write_one", 00:22:59.856 "zoned": false, 00:22:59.856 "supported_io_types": { 00:22:59.856 "read": true, 00:22:59.856 "write": true, 00:22:59.856 "unmap": true, 00:22:59.856 "flush": true, 00:22:59.856 "reset": true, 00:22:59.856 "nvme_admin": true, 00:22:59.856 "nvme_io": true, 00:22:59.856 "nvme_io_md": false, 00:22:59.856 "write_zeroes": true, 00:22:59.856 "zcopy": false, 00:22:59.856 "get_zone_info": false, 00:22:59.856 "zone_management": false, 00:22:59.856 "zone_append": false, 00:22:59.856 "compare": true, 00:22:59.856 "compare_and_write": false, 00:22:59.856 "abort": true, 00:22:59.856 "seek_hole": false, 00:22:59.856 "seek_data": false, 00:22:59.856 
"copy": true, 00:22:59.856 "nvme_iov_md": false 00:22:59.856 }, 00:22:59.856 "driver_specific": { 00:22:59.856 "nvme": [ 00:22:59.856 { 00:22:59.856 "pci_address": "0000:00:11.0", 00:22:59.856 "trid": { 00:22:59.856 "trtype": "PCIe", 00:22:59.856 "traddr": "0000:00:11.0" 00:22:59.856 }, 00:22:59.856 "ctrlr_data": { 00:22:59.856 "cntlid": 0, 00:22:59.856 "vendor_id": "0x1b36", 00:22:59.856 "model_number": "QEMU NVMe Ctrl", 00:22:59.856 "serial_number": "12341", 00:22:59.856 "firmware_revision": "8.0.0", 00:22:59.856 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:59.856 "oacs": { 00:22:59.856 "security": 0, 00:22:59.856 "format": 1, 00:22:59.856 "firmware": 0, 00:22:59.856 "ns_manage": 1 00:22:59.856 }, 00:22:59.856 "multi_ctrlr": false, 00:22:59.856 "ana_reporting": false 00:22:59.856 }, 00:22:59.856 "vs": { 00:22:59.856 "nvme_version": "1.4" 00:22:59.856 }, 00:22:59.856 "ns_data": { 00:22:59.856 "id": 1, 00:22:59.856 "can_share": false 00:22:59.856 } 00:22:59.856 } 00:22:59.856 ], 00:22:59.856 "mp_policy": "active_passive" 00:22:59.856 } 00:22:59.856 } 00:22:59.856 ]' 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:00.116 06:46:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:00.377 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d497fe9f-02d0-4236-b996-0b65a0764120 00:23:00.377 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:00.377 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d497fe9f-02d0-4236-b996-0b65a0764120 00:23:00.638 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:00.638 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7e084fbc-e64d-4698-bd82-81d41866e727 00:23:00.638 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7e084fbc-e64d-4698-bd82-81d41866e727 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:00.899 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.161 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.161 { 00:23:01.161 "name": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:01.161 "aliases": [ 00:23:01.161 "lvs/nvme0n1p0" 00:23:01.161 ], 00:23:01.161 "product_name": "Logical Volume", 00:23:01.161 "block_size": 4096, 00:23:01.161 "num_blocks": 26476544, 00:23:01.161 "uuid": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:01.161 "assigned_rate_limits": { 00:23:01.161 "rw_ios_per_sec": 0, 00:23:01.161 "rw_mbytes_per_sec": 0, 00:23:01.161 "r_mbytes_per_sec": 0, 00:23:01.161 "w_mbytes_per_sec": 0 00:23:01.161 }, 00:23:01.161 "claimed": false, 00:23:01.161 "zoned": false, 00:23:01.161 "supported_io_types": { 00:23:01.161 "read": true, 00:23:01.161 "write": true, 00:23:01.161 "unmap": true, 00:23:01.161 "flush": false, 00:23:01.161 "reset": true, 00:23:01.161 "nvme_admin": false, 00:23:01.161 "nvme_io": false, 00:23:01.161 "nvme_io_md": false, 00:23:01.161 "write_zeroes": true, 00:23:01.161 "zcopy": false, 00:23:01.161 "get_zone_info": false, 00:23:01.161 "zone_management": false, 00:23:01.161 "zone_append": false, 00:23:01.161 "compare": false, 00:23:01.161 "compare_and_write": false, 00:23:01.161 "abort": false, 00:23:01.161 "seek_hole": true, 00:23:01.161 "seek_data": true, 00:23:01.161 "copy": false, 00:23:01.161 "nvme_iov_md": false 00:23:01.161 }, 00:23:01.161 "driver_specific": { 00:23:01.161 "lvol": { 00:23:01.161 "lvol_store_uuid": "7e084fbc-e64d-4698-bd82-81d41866e727", 00:23:01.161 "base_bdev": "nvme0n1", 00:23:01.161 "thin_provision": true, 00:23:01.161 "num_allocated_clusters": 0, 00:23:01.161 "snapshot": false, 00:23:01.161 "clone": false, 00:23:01.161 "esnap_clone": false 00:23:01.161 } 00:23:01.161 } 00:23:01.161 } 00:23:01.161 ]' 00:23:01.161 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.161 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.161 06:46:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:01.161 06:46:53 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:01.422 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:01.423 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.682 { 00:23:01.682 "name": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:01.682 "aliases": [ 00:23:01.682 "lvs/nvme0n1p0" 00:23:01.682 ], 00:23:01.682 "product_name": "Logical Volume", 00:23:01.682 "block_size": 4096, 00:23:01.682 "num_blocks": 26476544, 00:23:01.682 "uuid": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:01.682 "assigned_rate_limits": { 00:23:01.682 "rw_ios_per_sec": 0, 00:23:01.682 "rw_mbytes_per_sec": 0, 00:23:01.682 "r_mbytes_per_sec": 0, 00:23:01.682 "w_mbytes_per_sec": 0 00:23:01.682 }, 00:23:01.682 "claimed": false, 00:23:01.682 "zoned": false, 00:23:01.682 "supported_io_types": { 00:23:01.682 "read": true, 00:23:01.682 "write": true, 00:23:01.682 "unmap": true, 00:23:01.682 "flush": false, 00:23:01.682 "reset": true, 00:23:01.682 "nvme_admin": false, 00:23:01.682 "nvme_io": false, 00:23:01.682 "nvme_io_md": false, 00:23:01.682 "write_zeroes": true, 00:23:01.682 "zcopy": false, 00:23:01.682 "get_zone_info": false, 00:23:01.682 "zone_management": false, 00:23:01.682 "zone_append": false, 00:23:01.682 "compare": false, 00:23:01.682 "compare_and_write": false, 00:23:01.682 "abort": false, 00:23:01.682 "seek_hole": true, 00:23:01.682 "seek_data": true, 00:23:01.682 "copy": false, 00:23:01.682 "nvme_iov_md": false 00:23:01.682 }, 00:23:01.682 "driver_specific": { 00:23:01.682 "lvol": { 00:23:01.682 "lvol_store_uuid": "7e084fbc-e64d-4698-bd82-81d41866e727", 00:23:01.682 "base_bdev": "nvme0n1", 00:23:01.682 "thin_provision": true, 00:23:01.682 "num_allocated_clusters": 0, 00:23:01.682 "snapshot": false, 00:23:01.682 "clone": false, 00:23:01.682 "esnap_clone": false 00:23:01.682 } 00:23:01.682 } 00:23:01.682 } 00:23:01.682 ]' 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:01.682 06:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:01.941 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bea2a91f-4365-4b2b-ac06-00c2ef2c317e 00:23:02.199 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:02.199 { 00:23:02.199 "name": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:02.199 "aliases": [ 00:23:02.199 "lvs/nvme0n1p0" 00:23:02.199 ], 00:23:02.199 "product_name": "Logical Volume", 00:23:02.199 "block_size": 4096, 00:23:02.199 "num_blocks": 26476544, 00:23:02.199 "uuid": "bea2a91f-4365-4b2b-ac06-00c2ef2c317e", 00:23:02.199 "assigned_rate_limits": { 00:23:02.199 "rw_ios_per_sec": 0, 00:23:02.199 "rw_mbytes_per_sec": 0, 00:23:02.199 "r_mbytes_per_sec": 0, 00:23:02.199 "w_mbytes_per_sec": 0 00:23:02.199 }, 00:23:02.199 "claimed": false, 00:23:02.199 "zoned": false, 00:23:02.199 "supported_io_types": { 00:23:02.199 "read": true, 00:23:02.199 "write": true, 00:23:02.199 "unmap": true, 00:23:02.199 "flush": false, 00:23:02.199 "reset": true, 00:23:02.199 "nvme_admin": false, 00:23:02.199 "nvme_io": false, 00:23:02.199 "nvme_io_md": false, 00:23:02.199 "write_zeroes": true, 00:23:02.199 "zcopy": false, 00:23:02.199 "get_zone_info": false, 00:23:02.199 "zone_management": false, 00:23:02.199 "zone_append": false, 00:23:02.199 "compare": false, 00:23:02.199 "compare_and_write": false, 00:23:02.199 "abort": false, 00:23:02.199 "seek_hole": true, 00:23:02.199 "seek_data": true, 00:23:02.199 "copy": false, 00:23:02.199 "nvme_iov_md": false 00:23:02.199 }, 00:23:02.199 "driver_specific": { 00:23:02.199 "lvol": { 00:23:02.199 "lvol_store_uuid": "7e084fbc-e64d-4698-bd82-81d41866e727", 00:23:02.199 "base_bdev": "nvme0n1", 00:23:02.199 "thin_provision": true, 00:23:02.199 "num_allocated_clusters": 0, 00:23:02.199 "snapshot": false, 00:23:02.199 "clone": false, 00:23:02.199 "esnap_clone": false 00:23:02.199 } 00:23:02.199 } 00:23:02.199 } 00:23:02.199 ]' 00:23:02.199 06:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bea2a91f-4365-4b2b-ac06-00c2ef2c317e 
--l2p_dram_limit 10' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:02.199 06:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bea2a91f-4365-4b2b-ac06-00c2ef2c317e --l2p_dram_limit 10 -c nvc0n1p0 00:23:02.460 [2024-11-19 06:46:54.247118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.247151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.460 [2024-11-19 06:46:54.247162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.460 [2024-11-19 06:46:54.247169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.247214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.247222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.460 [2024-11-19 06:46:54.247230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:02.460 [2024-11-19 06:46:54.247236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.247255] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.460 [2024-11-19 06:46:54.247851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.460 [2024-11-19 06:46:54.247868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.247874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.460 [2024-11-19 06:46:54.247883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:23:02.460 [2024-11-19 06:46:54.247889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.247913] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c9a17d6d-b7e5-4bf1-931a-08f01203310e 00:23:02.460 [2024-11-19 06:46:54.248817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.248835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:02.460 [2024-11-19 06:46:54.248843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:02.460 [2024-11-19 06:46:54.248852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.253475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.253498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.460 [2024-11-19 06:46:54.253508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:23:02.460 [2024-11-19 06:46:54.253515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.253610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.253620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.460 [2024-11-19 06:46:54.253627] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:02.460 [2024-11-19 06:46:54.253637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.253672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.253681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.460 [2024-11-19 06:46:54.253687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.460 [2024-11-19 06:46:54.253696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-19 06:46:54.253713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.460 [2024-11-19 06:46:54.256549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-19 06:46:54.256571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.460 [2024-11-19 06:46:54.256581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:23:02.460 [2024-11-19 06:46:54.256587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-19 06:46:54.256612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-19 06:46:54.256618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.461 [2024-11-19 06:46:54.256625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.461 [2024-11-19 06:46:54.256631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-19 06:46:54.256645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:02.461 [2024-11-19 06:46:54.256748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.461 [2024-11-19 06:46:54.256760] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.461 [2024-11-19 06:46:54.256769] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.461 [2024-11-19 06:46:54.256781] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.461 [2024-11-19 06:46:54.256788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.461 [2024-11-19 06:46:54.256795] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.461 [2024-11-19 06:46:54.256801] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.461 [2024-11-19 06:46:54.256809] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.461 [2024-11-19 06:46:54.256815] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.461 [2024-11-19 06:46:54.256822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-19 06:46:54.256828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.461 [2024-11-19 06:46:54.256835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:23:02.461 [2024-11-19 06:46:54.256846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-19 06:46:54.256909] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-19 06:46:54.256916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.461 [2024-11-19 06:46:54.256932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:02.461 [2024-11-19 06:46:54.256938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-19 06:46:54.257015] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.461 [2024-11-19 06:46:54.257024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.461 [2024-11-19 06:46:54.257032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.461 [2024-11-19 06:46:54.257050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.461 [2024-11-19 06:46:54.257069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.461 [2024-11-19 06:46:54.257083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.461 [2024-11-19 06:46:54.257090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.461 [2024-11-19 06:46:54.257096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.461 [2024-11-19 06:46:54.257101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.461 [2024-11-19 06:46:54.257108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.461 [2024-11-19 06:46:54.257113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.461 [2024-11-19 06:46:54.257127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.461 [2024-11-19 06:46:54.257146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.461 [2024-11-19 06:46:54.257163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.461 [2024-11-19 06:46:54.257180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257191] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.461 [2024-11-19 06:46:54.257196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.461 [2024-11-19 06:46:54.257216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.461 [2024-11-19 06:46:54.257227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.461 [2024-11-19 06:46:54.257232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.461 [2024-11-19 06:46:54.257238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.461 [2024-11-19 06:46:54.257242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.461 [2024-11-19 06:46:54.257249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.461 [2024-11-19 06:46:54.257253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.461 [2024-11-19 06:46:54.257264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.461 [2024-11-19 06:46:54.257271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.461 [2024-11-19 06:46:54.257284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.461 [2024-11-19 06:46:54.257290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.461 [2024-11-19 06:46:54.257304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.461 [2024-11-19 06:46:54.257312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.461 [2024-11-19 06:46:54.257317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.461 [2024-11-19 06:46:54.257324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.461 [2024-11-19 06:46:54.257328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.461 [2024-11-19 06:46:54.257335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.461 [2024-11-19 06:46:54.257342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.461 [2024-11-19 06:46:54.257351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.461 [2024-11-19 06:46:54.257365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.461 [2024-11-19 06:46:54.257371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.461 [2024-11-19 06:46:54.257377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.461 [2024-11-19 06:46:54.257383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.461 [2024-11-19 06:46:54.257390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.461 [2024-11-19 06:46:54.257396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.461 [2024-11-19 06:46:54.257402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.461 [2024-11-19 06:46:54.257407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.461 [2024-11-19 06:46:54.257416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.461 [2024-11-19 06:46:54.257447] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.461 [2024-11-19 06:46:54.257454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.461 [2024-11-19 06:46:54.257467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.462 [2024-11-19 06:46:54.257472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.462 [2024-11-19 06:46:54.257480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.462 [2024-11-19 06:46:54.257486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 [2024-11-19 06:46:54.257493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.462 [2024-11-19 06:46:54.257500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:02.462 [2024-11-19 06:46:54.257506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-19 06:46:54.257536] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:02.462 [2024-11-19 06:46:54.257546] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:06.665 [2024-11-19 06:46:57.979638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:57.979719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:06.665 [2024-11-19 06:46:57.979737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3722.083 ms 00:23:06.665 [2024-11-19 06:46:57.979750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.012040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.012098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.665 [2024-11-19 06:46:58.012112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.009 ms 00:23:06.665 [2024-11-19 06:46:58.012125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.012266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.012280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:06.665 [2024-11-19 06:46:58.012292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:06.665 [2024-11-19 06:46:58.012306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.048019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.048067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.665 [2024-11-19 06:46:58.048079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.654 ms 00:23:06.665 [2024-11-19 06:46:58.048089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.048123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.048140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.665 [2024-11-19 06:46:58.048149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:06.665 [2024-11-19 06:46:58.048159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.048713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.048742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.665 [2024-11-19 06:46:58.048753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:23:06.665 [2024-11-19 06:46:58.048764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.048881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.048894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.665 [2024-11-19 06:46:58.048908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:06.665 [2024-11-19 06:46:58.048946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.066662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.066707] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.665 [2024-11-19 06:46:58.066718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:23:06.665 [2024-11-19 06:46:58.066729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.079953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:06.665 [2024-11-19 06:46:58.083787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.083825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:06.665 [2024-11-19 06:46:58.083839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.971 ms 00:23:06.665 [2024-11-19 06:46:58.083848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.183918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.183985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:06.665 [2024-11-19 06:46:58.184006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.034 ms 00:23:06.665 [2024-11-19 06:46:58.184016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.184231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.184248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.665 [2024-11-19 06:46:58.184263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:23:06.665 [2024-11-19 06:46:58.184271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.210356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.210403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:06.665 [2024-11-19 06:46:58.210419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.026 ms 00:23:06.665 [2024-11-19 06:46:58.210428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.235695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.235736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:06.665 [2024-11-19 06:46:58.235751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.204 ms 00:23:06.665 [2024-11-19 06:46:58.235759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.236403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.236419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.665 [2024-11-19 06:46:58.236432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:23:06.665 [2024-11-19 06:46:58.236441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.326382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.326426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:06.665 [2024-11-19 06:46:58.326446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.890 ms 00:23:06.665 [2024-11-19 06:46:58.326454] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.354569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.354617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:06.665 [2024-11-19 06:46:58.354633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.014 ms 00:23:06.665 [2024-11-19 06:46:58.354641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.380777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.380819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:06.665 [2024-11-19 06:46:58.380834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.079 ms 00:23:06.665 [2024-11-19 06:46:58.380841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.407180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.407222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.665 [2024-11-19 06:46:58.407236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.283 ms 00:23:06.665 [2024-11-19 06:46:58.407244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.407301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.407311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.665 [2024-11-19 06:46:58.407327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:06.665 [2024-11-19 06:46:58.407335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.407431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.665 [2024-11-19 06:46:58.407444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.665 [2024-11-19 06:46:58.407458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:06.665 [2024-11-19 06:46:58.407466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.665 [2024-11-19 06:46:58.408741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4161.103 ms, result 0 00:23:06.665 { 00:23:06.665 "name": "ftl0", 00:23:06.665 "uuid": "c9a17d6d-b7e5-4bf1-931a-08f01203310e" 00:23:06.665 } 00:23:06.665 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:06.665 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:06.926 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:06.926 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:06.926 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:07.187 /dev/nbd0 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:07.187 1+0 records in 00:23:07.187 1+0 records out 00:23:07.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464691 s, 8.8 MB/s 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:07.187 06:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:07.187 [2024-11-19 06:46:59.000057] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:07.187 [2024-11-19 06:46:59.000212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77595 ] 00:23:07.461 [2024-11-19 06:46:59.167058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.461 [2024-11-19 06:46:59.310122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.842  [2024-11-19T06:47:01.705Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-19T06:47:02.639Z] Copying: 394/1024 [MB] (205 MBps) [2024-11-19T06:47:04.014Z] Copying: 650/1024 [MB] (255 MBps) [2024-11-19T06:47:04.272Z] Copying: 900/1024 [MB] (250 MBps) [2024-11-19T06:47:04.839Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:23:12.910 00:23:12.910 06:47:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:14.811 06:47:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:14.811 [2024-11-19 06:47:06.364582] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:14.811 [2024-11-19 06:47:06.364673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77671 ] 00:23:14.811 [2024-11-19 06:47:06.514037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.811 [2024-11-19 06:47:06.601469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.189  [2024-11-19T06:47:09.057Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-19T06:47:10.007Z] Copying: 44/1024 [MB] (15 MBps) [2024-11-19T06:47:10.948Z] Copying: 62/1024 [MB] (18 MBps) [2024-11-19T06:47:11.889Z] Copying: 79/1024 [MB] (17 MBps) [2024-11-19T06:47:12.832Z] Copying: 95/1024 [MB] (15 MBps) [2024-11-19T06:47:14.219Z] Copying: 114/1024 [MB] (19 MBps) [2024-11-19T06:47:14.789Z] Copying: 127/1024 [MB] (12 MBps) [2024-11-19T06:47:16.169Z] Copying: 145/1024 [MB] (18 MBps) [2024-11-19T06:47:17.110Z] Copying: 171/1024 [MB] (26 MBps) [2024-11-19T06:47:18.054Z] Copying: 198/1024 [MB] (26 MBps) [2024-11-19T06:47:18.997Z] Copying: 215/1024 [MB] (17 MBps) [2024-11-19T06:47:20.067Z] Copying: 233/1024 [MB] (17 MBps) [2024-11-19T06:47:21.010Z] Copying: 252/1024 [MB] (19 MBps) [2024-11-19T06:47:21.948Z] Copying: 272/1024 [MB] (19 MBps) [2024-11-19T06:47:22.888Z] Copying: 294/1024 [MB] (21 MBps) [2024-11-19T06:47:23.827Z] Copying: 318/1024 [MB] (24 MBps) [2024-11-19T06:47:25.210Z] Copying: 339/1024 [MB] (21 MBps) [2024-11-19T06:47:26.153Z] Copying: 357/1024 [MB] (17 MBps) [2024-11-19T06:47:27.089Z] Copying: 373/1024 [MB] (15 MBps) [2024-11-19T06:47:28.033Z] Copying: 391/1024 [MB] (18 MBps) [2024-11-19T06:47:28.976Z] Copying: 405/1024 [MB] (13 MBps) [2024-11-19T06:47:29.919Z] Copying: 420/1024 [MB] (15 MBps) [2024-11-19T06:47:30.864Z] Copying: 435/1024 [MB] (15 MBps) [2024-11-19T06:47:31.809Z] Copying: 448/1024 [MB] (13 MBps) [2024-11-19T06:47:33.195Z] Copying: 462/1024 [MB] (13 MBps) [2024-11-19T06:47:34.139Z] Copying: 476/1024 [MB] (14 MBps) [2024-11-19T06:47:35.083Z] Copying: 497/1024 [MB] (20 MBps) [2024-11-19T06:47:36.026Z] Copying: 511/1024 [MB] (13 MBps) [2024-11-19T06:47:36.970Z] Copying: 526/1024 [MB] (14 MBps) [2024-11-19T06:47:37.919Z] Copying: 540/1024 [MB] (14 MBps) [2024-11-19T06:47:38.855Z] Copying: 572/1024 [MB] (31 MBps) [2024-11-19T06:47:39.797Z] Copying: 604/1024 [MB] (32 MBps) [2024-11-19T06:47:41.182Z] Copying: 620/1024 [MB] (15 MBps) [2024-11-19T06:47:42.120Z] Copying: 635/1024 [MB] (15 MBps) [2024-11-19T06:47:43.058Z] Copying: 654/1024 [MB] (19 MBps) [2024-11-19T06:47:44.001Z] Copying: 682/1024 [MB] (27 MBps) [2024-11-19T06:47:44.990Z] Copying: 700/1024 [MB] (18 MBps) [2024-11-19T06:47:45.933Z] Copying: 719/1024 [MB] (18 MBps) [2024-11-19T06:47:46.872Z] Copying: 738/1024 [MB] (19 MBps) [2024-11-19T06:47:47.813Z] Copying: 761/1024 [MB] (22 MBps) [2024-11-19T06:47:49.196Z] Copying: 778/1024 [MB] (17 MBps) [2024-11-19T06:47:50.134Z] Copying: 806/1024 [MB] (27 MBps) [2024-11-19T06:47:51.071Z] Copying: 834/1024 [MB] (28 MBps) [2024-11-19T06:47:52.004Z] Copying: 854/1024 [MB] (19 MBps) [2024-11-19T06:47:52.936Z] Copying: 888/1024 [MB] (34 MBps) [2024-11-19T06:47:53.884Z] Copying: 922/1024 [MB] (34 MBps) [2024-11-19T06:47:54.827Z] Copying: 955/1024 [MB] (33 MBps) [2024-11-19T06:47:56.211Z] Copying: 976/1024 [MB] (20 MBps) [2024-11-19T06:47:56.778Z] Copying: 1002/1024 [MB] (25 MBps) [2024-11-19T06:47:57.349Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:24:05.420 
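The write phase of the dirty-shutdown test is now complete: ftl0 (built on lvol bea2a91f-4365-4b2b-ac06-00c2ef2c317e with nvc0n1p0 as the NV cache) was exposed over NBD and 1 GiB of random data was pushed through it. Condensed, the commands traced above amount to the sketch below; it reuses the command lines visible in the trace and assumes this job's paths and a running spdk_tgt, it is not a verbatim excerpt of dirty_shutdown.sh.

# Sketch only -- commands as they appear in the trace above; paths/bdev names are this job's.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
FTL_TEST=/home/vagrant/spdk_repo/spdk/test/ftl

# Create the FTL bdev on top of the thin-provisioned lvol, with nvc0n1p0 as the write-buffer (NV) cache
$RPC -t 240 bdev_ftl_create -b ftl0 -d bea2a91f-4365-4b2b-ac06-00c2ef2c317e --l2p_dram_limit 10 -c nvc0n1p0

# Expose ftl0 as /dev/nbd0 and push 262144 x 4 KiB (1 GiB) of random data through it
modprobe nbd
$RPC nbd_start_disk ftl0 /dev/nbd0
$DD -m 0x2 --if=/dev/urandom --of=$FTL_TEST/testfile --bs=4096 --count=262144
md5sum $FTL_TEST/testfile
$DD -m 0x2 --if=$FTL_TEST/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The next lines in the trace flush and tear down the NBD device before the shutdown steps.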
00:24:05.420 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:05.420 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:05.678 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:05.678 [2024-11-19 06:47:57.518456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.518492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:05.678 [2024-11-19 06:47:57.518502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:05.678 [2024-11-19 06:47:57.518510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.518527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:05.678 [2024-11-19 06:47:57.520592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.520706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:05.678 [2024-11-19 06:47:57.520722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:24:05.678 [2024-11-19 06:47:57.520729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.522847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.522880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:05.678 [2024-11-19 06:47:57.522891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:24:05.678 [2024-11-19 06:47:57.522898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.537768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.537797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.678 [2024-11-19 06:47:57.537807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.852 ms 00:24:05.678 [2024-11-19 06:47:57.537814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.542647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.542669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:05.678 [2024-11-19 06:47:57.542679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:24:05.678 [2024-11-19 06:47:57.542686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.561600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.561626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.678 [2024-11-19 06:47:57.561636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.861 ms 00:24:05.678 [2024-11-19 06:47:57.561642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.574354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.574380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.678 [2024-11-19 06:47:57.574391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 12.684 ms 00:24:05.678 [2024-11-19 06:47:57.574406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.574507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.574514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:05.678 [2024-11-19 06:47:57.574523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:05.678 [2024-11-19 06:47:57.574528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.678 [2024-11-19 06:47:57.592760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.678 [2024-11-19 06:47:57.592783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:05.678 [2024-11-19 06:47:57.592792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.217 ms 00:24:05.678 [2024-11-19 06:47:57.592798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.940 [2024-11-19 06:47:57.610526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.940 [2024-11-19 06:47:57.610549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:05.940 [2024-11-19 06:47:57.610558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.701 ms 00:24:05.940 [2024-11-19 06:47:57.610564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.940 [2024-11-19 06:47:57.628166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.940 [2024-11-19 06:47:57.628276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:05.940 [2024-11-19 06:47:57.628292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.573 ms 00:24:05.940 [2024-11-19 06:47:57.628297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.940 [2024-11-19 06:47:57.646148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.940 [2024-11-19 06:47:57.646172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:05.940 [2024-11-19 06:47:57.646181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.772 ms 00:24:05.940 [2024-11-19 06:47:57.646187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.940 [2024-11-19 06:47:57.646214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:05.940 [2024-11-19 06:47:57.646224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646273] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 
[2024-11-19 06:47:57.646439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:05.941 [2024-11-19 06:47:57.646603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:05.941 [2024-11-19 06:47:57.646799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:05.942 [2024-11-19 06:47:57.646889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:05.942 [2024-11-19 06:47:57.646896] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9a17d6d-b7e5-4bf1-931a-08f01203310e 00:24:05.942 [2024-11-19 06:47:57.646902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:05.942 [2024-11-19 06:47:57.646910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:05.942 [2024-11-19 06:47:57.646915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:05.942 [2024-11-19 06:47:57.646935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:05.942 [2024-11-19 06:47:57.646941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:05.942 [2024-11-19 06:47:57.646948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:05.942 [2024-11-19 06:47:57.646954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:05.942 [2024-11-19 06:47:57.646959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:05.942 [2024-11-19 06:47:57.646964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:05.942 [2024-11-19 06:47:57.646971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.942 [2024-11-19 06:47:57.646977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:05.942 [2024-11-19 06:47:57.646985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:24:05.942 [2024-11-19 06:47:57.646990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.656524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.942 [2024-11-19 06:47:57.656547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:05.942 [2024-11-19 06:47:57.656558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.510 ms 00:24:05.942 [2024-11-19 06:47:57.656564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.656828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.942 [2024-11-19 06:47:57.656836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:05.942 [2024-11-19 06:47:57.656843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:24:05.942 [2024-11-19 06:47:57.656849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.689442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.689468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.942 [2024-11-19 06:47:57.689477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.689483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.689522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.689529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.942 [2024-11-19 06:47:57.689536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.689542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.689595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.689603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.942 [2024-11-19 06:47:57.689612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.689618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.689634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.689640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.942 [2024-11-19 06:47:57.689647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.689653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.748000] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.748034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.942 [2024-11-19 06:47:57.748044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.748050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.796738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.796768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.942 [2024-11-19 06:47:57.796778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.796784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.796864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.796871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.942 [2024-11-19 06:47:57.796879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.796886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.796942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.796951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.942 [2024-11-19 06:47:57.796958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.796964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.797036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.797044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.942 [2024-11-19 06:47:57.797051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.797058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.797107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.797114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.942 [2024-11-19 06:47:57.797122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.797128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.797157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.797164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.942 [2024-11-19 06:47:57.797172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.797177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.942 [2024-11-19 06:47:57.797216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.942 [2024-11-19 06:47:57.797224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.942 [2024-11-19 06:47:57.797231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.942 [2024-11-19 06:47:57.797241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:05.942 [2024-11-19 06:47:57.797345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.857 ms, result 0 00:24:05.942 true 00:24:05.942 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77442 00:24:05.942 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77442 00:24:05.942 06:47:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:06.202 [2024-11-19 06:47:57.890409] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:06.202 [2024-11-19 06:47:57.890524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78206 ] 00:24:06.202 [2024-11-19 06:47:58.046368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.202 [2024-11-19 06:47:58.121253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.583  [2024-11-19T06:48:00.445Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-19T06:48:01.379Z] Copying: 517/1024 [MB] (259 MBps) [2024-11-19T06:48:02.314Z] Copying: 773/1024 [MB] (255 MBps) [2024-11-19T06:48:02.314Z] Copying: 1020/1024 [MB] (247 MBps) [2024-11-19T06:48:02.883Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:24:10.954 00:24:10.954 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77442 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:10.954 06:48:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.213 [2024-11-19 06:48:02.942614] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:11.213 [2024-11-19 06:48:02.942915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78260 ] 00:24:11.213 [2024-11-19 06:48:03.093314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.471 [2024-11-19 06:48:03.169842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.471 [2024-11-19 06:48:03.374841] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.471 [2024-11-19 06:48:03.375035] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.729 [2024-11-19 06:48:03.437416] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:11.729 [2024-11-19 06:48:03.437737] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:11.729 [2024-11-19 06:48:03.437956] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:11.729 [2024-11-19 06:48:03.659408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.729 [2024-11-19 06:48:03.659522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.729 [2024-11-19 06:48:03.659586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:11.729 [2024-11-19 06:48:03.659604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.729 [2024-11-19 06:48:03.659657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.659676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.989 [2024-11-19 06:48:03.659684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:11.989 [2024-11-19 06:48:03.659691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.659707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.989 [2024-11-19 06:48:03.660266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.989 [2024-11-19 06:48:03.660279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.660285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.989 [2024-11-19 06:48:03.660292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:24:11.989 [2024-11-19 06:48:03.660297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.661286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.989 [2024-11-19 06:48:03.670853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.670954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.989 [2024-11-19 06:48:03.670996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.568 ms 00:24:11.989 [2024-11-19 06:48:03.671012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.671075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.671202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:11.989 [2024-11-19 06:48:03.671220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:11.989 [2024-11-19 06:48:03.671234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.675482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.675567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.989 [2024-11-19 06:48:03.675635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:24:11.989 [2024-11-19 06:48:03.675653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.675714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.675732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.989 [2024-11-19 06:48:03.675774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:11.989 [2024-11-19 06:48:03.675791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.675833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.675854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.989 [2024-11-19 06:48:03.675870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.989 [2024-11-19 06:48:03.675930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.675956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.989 [2024-11-19 06:48:03.678496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.678574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.989 [2024-11-19 06:48:03.678614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:24:11.989 [2024-11-19 06:48:03.678630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.678665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.678749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.989 [2024-11-19 06:48:03.678767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.989 [2024-11-19 06:48:03.678782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.678805] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.989 [2024-11-19 06:48:03.678833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.989 [2024-11-19 06:48:03.678904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.989 [2024-11-19 06:48:03.678947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:11.989 [2024-11-19 06:48:03.679077] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.989 [2024-11-19 06:48:03.679105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.989 
[2024-11-19 06:48:03.679165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:11.989 [2024-11-19 06:48:03.679212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679238] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:11.989 [2024-11-19 06:48:03.679275] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.989 [2024-11-19 06:48:03.679314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.989 [2024-11-19 06:48:03.679332] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.989 [2024-11-19 06:48:03.679346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.679361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.989 [2024-11-19 06:48:03.679376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:24:11.989 [2024-11-19 06:48:03.679391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.679486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.989 [2024-11-19 06:48:03.679526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.989 [2024-11-19 06:48:03.679583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:11.989 [2024-11-19 06:48:03.679600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.989 [2024-11-19 06:48:03.679707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.989 [2024-11-19 06:48:03.679718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.989 [2024-11-19 06:48:03.679725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.989 [2024-11-19 06:48:03.679744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.989 [2024-11-19 06:48:03.679760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.989 [2024-11-19 06:48:03.679771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.989 [2024-11-19 06:48:03.679780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:11.989 [2024-11-19 06:48:03.679785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.989 [2024-11-19 06:48:03.679790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.989 [2024-11-19 06:48:03.679795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:11.989 [2024-11-19 06:48:03.679801] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.989 [2024-11-19 06:48:03.679811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.989 [2024-11-19 06:48:03.679826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.989 [2024-11-19 06:48:03.679842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.989 [2024-11-19 06:48:03.679856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.989 [2024-11-19 06:48:03.679871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.989 [2024-11-19 06:48:03.679887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.989 [2024-11-19 06:48:03.679896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.989 [2024-11-19 06:48:03.679901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:11.989 [2024-11-19 06:48:03.679906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.989 [2024-11-19 06:48:03.679912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.989 [2024-11-19 06:48:03.679916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:11.989 [2024-11-19 06:48:03.679921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.989 [2024-11-19 06:48:03.679941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:11.989 [2024-11-19 06:48:03.679946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 06:48:03.679951] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.989 [2024-11-19 06:48:03.679957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.989 [2024-11-19 06:48:03.679963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.989 [2024-11-19 06:48:03.679970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.989 [2024-11-19 
06:48:03.679976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.989 [2024-11-19 06:48:03.679981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.989 [2024-11-19 06:48:03.679986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.989 [2024-11-19 06:48:03.679991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.989 [2024-11-19 06:48:03.679995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.989 [2024-11-19 06:48:03.680000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.990 [2024-11-19 06:48:03.680006] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.990 [2024-11-19 06:48:03.680014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.990 [2024-11-19 06:48:03.680026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:11.990 [2024-11-19 06:48:03.680031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:11.990 [2024-11-19 06:48:03.680036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:11.990 [2024-11-19 06:48:03.680042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:11.990 [2024-11-19 06:48:03.680047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:11.990 [2024-11-19 06:48:03.680053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:11.990 [2024-11-19 06:48:03.680058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:11.990 [2024-11-19 06:48:03.680063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:11.990 [2024-11-19 06:48:03.680069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:11.990 [2024-11-19 06:48:03.680095] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:11.990 [2024-11-19 06:48:03.680102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.990 [2024-11-19 06:48:03.680113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.990 [2024-11-19 06:48:03.680119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.990 [2024-11-19 06:48:03.680124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.990 [2024-11-19 06:48:03.680129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.680135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.990 [2024-11-19 06:48:03.680140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:24:11.990 [2024-11-19 06:48:03.680146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.700760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.700857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.990 [2024-11-19 06:48:03.700869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:24:11.990 [2024-11-19 06:48:03.700876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.700949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.700959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.990 [2024-11-19 06:48:03.700965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:11.990 [2024-11-19 06:48:03.700972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.740760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.740790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.990 [2024-11-19 06:48:03.740800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.750 ms 00:24:11.990 [2024-11-19 06:48:03.740808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.740837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.740844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.990 [2024-11-19 06:48:03.740851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:11.990 [2024-11-19 06:48:03.740857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.741189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.741203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.990 [2024-11-19 06:48:03.741210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:24:11.990 [2024-11-19 06:48:03.741217] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.741315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.741323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.990 [2024-11-19 06:48:03.741329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:11.990 [2024-11-19 06:48:03.741335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.751635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.751659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.990 [2024-11-19 06:48:03.751666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.283 ms 00:24:11.990 [2024-11-19 06:48:03.751673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.761798] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:11.990 [2024-11-19 06:48:03.761824] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:11.990 [2024-11-19 06:48:03.761833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.761839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:11.990 [2024-11-19 06:48:03.761846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.080 ms 00:24:11.990 [2024-11-19 06:48:03.761852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.780293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.780319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:11.990 [2024-11-19 06:48:03.780333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.412 ms 00:24:11.990 [2024-11-19 06:48:03.780339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.789348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.789371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:11.990 [2024-11-19 06:48:03.789379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.981 ms 00:24:11.990 [2024-11-19 06:48:03.789385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.798469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.798493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:11.990 [2024-11-19 06:48:03.798500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.060 ms 00:24:11.990 [2024-11-19 06:48:03.798506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.798970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.798982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:11.990 [2024-11-19 06:48:03.798989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:24:11.990 [2024-11-19 06:48:03.798995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 
[2024-11-19 06:48:03.843091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.843124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:11.990 [2024-11-19 06:48:03.843134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.083 ms 00:24:11.990 [2024-11-19 06:48:03.843141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.850962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:11.990 [2024-11-19 06:48:03.852628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.852650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:11.990 [2024-11-19 06:48:03.852658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.456 ms 00:24:11.990 [2024-11-19 06:48:03.852664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.852714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.852723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:11.990 [2024-11-19 06:48:03.852730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.990 [2024-11-19 06:48:03.852736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.852786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.852794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:11.990 [2024-11-19 06:48:03.852800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:11.990 [2024-11-19 06:48:03.852807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.852821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.852829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:11.990 [2024-11-19 06:48:03.852835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:11.990 [2024-11-19 06:48:03.852841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.852864] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:11.990 [2024-11-19 06:48:03.852871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.852877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:11.990 [2024-11-19 06:48:03.852883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:11.990 [2024-11-19 06:48:03.852888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.870708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.870845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:11.990 [2024-11-19 06:48:03.870858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.805 ms 00:24:11.990 [2024-11-19 06:48:03.870864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.870917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.990 [2024-11-19 06:48:03.870936] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:11.990 [2024-11-19 06:48:03.870943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:11.990 [2024-11-19 06:48:03.870949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.990 [2024-11-19 06:48:03.871652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.915 ms, result 0 00:24:13.375  [2024-11-19T06:48:06.237Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-19T06:48:07.174Z] Copying: 35/1024 [MB] (20 MBps) [2024-11-19T06:48:08.113Z] Copying: 56/1024 [MB] (21 MBps) [2024-11-19T06:48:09.045Z] Copying: 77/1024 [MB] (20 MBps) [2024-11-19T06:48:09.990Z] Copying: 111/1024 [MB] (34 MBps) [2024-11-19T06:48:10.923Z] Copying: 133/1024 [MB] (22 MBps) [2024-11-19T06:48:12.295Z] Copying: 168/1024 [MB] (34 MBps) [2024-11-19T06:48:13.228Z] Copying: 192/1024 [MB] (23 MBps) [2024-11-19T06:48:14.171Z] Copying: 222/1024 [MB] (30 MBps) [2024-11-19T06:48:15.115Z] Copying: 238/1024 [MB] (16 MBps) [2024-11-19T06:48:16.051Z] Copying: 250/1024 [MB] (12 MBps) [2024-11-19T06:48:16.984Z] Copying: 265/1024 [MB] (14 MBps) [2024-11-19T06:48:17.941Z] Copying: 288/1024 [MB] (23 MBps) [2024-11-19T06:48:18.962Z] Copying: 309/1024 [MB] (21 MBps) [2024-11-19T06:48:19.895Z] Copying: 331/1024 [MB] (21 MBps) [2024-11-19T06:48:21.266Z] Copying: 351/1024 [MB] (20 MBps) [2024-11-19T06:48:22.200Z] Copying: 371/1024 [MB] (20 MBps) [2024-11-19T06:48:23.136Z] Copying: 391/1024 [MB] (20 MBps) [2024-11-19T06:48:24.074Z] Copying: 413/1024 [MB] (21 MBps) [2024-11-19T06:48:25.018Z] Copying: 430/1024 [MB] (17 MBps) [2024-11-19T06:48:25.963Z] Copying: 445/1024 [MB] (15 MBps) [2024-11-19T06:48:26.909Z] Copying: 458/1024 [MB] (12 MBps) [2024-11-19T06:48:28.287Z] Copying: 470/1024 [MB] (12 MBps) [2024-11-19T06:48:29.230Z] Copying: 498/1024 [MB] (27 MBps) [2024-11-19T06:48:30.167Z] Copying: 520/1024 [MB] (22 MBps) [2024-11-19T06:48:31.100Z] Copying: 538/1024 [MB] (17 MBps) [2024-11-19T06:48:32.044Z] Copying: 570/1024 [MB] (32 MBps) [2024-11-19T06:48:32.980Z] Copying: 599/1024 [MB] (28 MBps) [2024-11-19T06:48:33.912Z] Copying: 625/1024 [MB] (26 MBps) [2024-11-19T06:48:35.286Z] Copying: 657/1024 [MB] (31 MBps) [2024-11-19T06:48:36.231Z] Copying: 688/1024 [MB] (31 MBps) [2024-11-19T06:48:37.165Z] Copying: 727/1024 [MB] (38 MBps) [2024-11-19T06:48:38.101Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-19T06:48:39.042Z] Copying: 774/1024 [MB] (28 MBps) [2024-11-19T06:48:39.982Z] Copying: 792/1024 [MB] (17 MBps) [2024-11-19T06:48:40.921Z] Copying: 808/1024 [MB] (16 MBps) [2024-11-19T06:48:42.299Z] Copying: 844/1024 [MB] (36 MBps) [2024-11-19T06:48:43.232Z] Copying: 884/1024 [MB] (39 MBps) [2024-11-19T06:48:44.165Z] Copying: 914/1024 [MB] (30 MBps) [2024-11-19T06:48:45.101Z] Copying: 964/1024 [MB] (49 MBps) [2024-11-19T06:48:46.045Z] Copying: 993/1024 [MB] (29 MBps) [2024-11-19T06:48:46.984Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-19T06:48:47.553Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-19T06:48:47.553Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 06:48:47.514835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.624 [2024-11-19 06:48:47.514888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.624 [2024-11-19 06:48:47.514902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:55.624 [2024-11-19 06:48:47.514910] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:55.624 [2024-11-19 06:48:47.514949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.624 [2024-11-19 06:48:47.520775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.624 [2024-11-19 06:48:47.520967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.624 [2024-11-19 06:48:47.520986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.809 ms 00:24:55.624 [2024-11-19 06:48:47.520994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.624 [2024-11-19 06:48:47.531207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.624 [2024-11-19 06:48:47.531252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.624 [2024-11-19 06:48:47.531265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.158 ms 00:24:55.624 [2024-11-19 06:48:47.531273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.624 [2024-11-19 06:48:47.553672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.624 [2024-11-19 06:48:47.553722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.624 [2024-11-19 06:48:47.553735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:24:55.624 [2024-11-19 06:48:47.553744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.885 [2024-11-19 06:48:47.559879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.885 [2024-11-19 06:48:47.560039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.885 [2024-11-19 06:48:47.560057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.104 ms 00:24:55.885 [2024-11-19 06:48:47.560066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.885 [2024-11-19 06:48:47.584435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.885 [2024-11-19 06:48:47.584468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.885 [2024-11-19 06:48:47.584479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.328 ms 00:24:55.885 [2024-11-19 06:48:47.584486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.885 [2024-11-19 06:48:47.598670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.885 [2024-11-19 06:48:47.598702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.885 [2024-11-19 06:48:47.598713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.151 ms 00:24:55.885 [2024-11-19 06:48:47.598721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.887655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.193 [2024-11-19 06:48:47.887724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:56.193 [2024-11-19 06:48:47.887739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 288.893 ms 00:24:56.193 [2024-11-19 06:48:47.887756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.913536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.193 [2024-11-19 06:48:47.913583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
band info metadata 00:24:56.193 [2024-11-19 06:48:47.913595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:24:56.193 [2024-11-19 06:48:47.913604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.939259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.193 [2024-11-19 06:48:47.939304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:56.193 [2024-11-19 06:48:47.939317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:24:56.193 [2024-11-19 06:48:47.939324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.964339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.193 [2024-11-19 06:48:47.964385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:56.193 [2024-11-19 06:48:47.964397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.970 ms 00:24:56.193 [2024-11-19 06:48:47.964405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.989244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.193 [2024-11-19 06:48:47.989288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:56.193 [2024-11-19 06:48:47.989300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.736 ms 00:24:56.193 [2024-11-19 06:48:47.989308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.193 [2024-11-19 06:48:47.989353] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:56.193 [2024-11-19 06:48:47.989367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103680 / 261120 wr_cnt: 1 state: open 00:24:56.193 [2024-11-19 06:48:47.989379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 
0 state: free 00:24:56.193 [2024-11-19 06:48:47.989482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
38: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:56.193 [2024-11-19 06:48:47.989950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.989958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.989967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.989977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.989986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.989993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990104] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990341] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:56.194 [2024-11-19 06:48:47.990452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:56.194 [2024-11-19 06:48:47.990460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9a17d6d-b7e5-4bf1-931a-08f01203310e 00:24:56.194 [2024-11-19 06:48:47.990470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103680 00:24:56.194 [2024-11-19 06:48:47.990484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104640 00:24:56.194 [2024-11-19 06:48:47.990499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103680 00:24:56.194 [2024-11-19 06:48:47.990508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:56.194 [2024-11-19 06:48:47.990516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:56.194 [2024-11-19 06:48:47.990525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:56.194 [2024-11-19 06:48:47.990533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:56.194 [2024-11-19 06:48:47.990541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:56.194 [2024-11-19 06:48:47.990549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:56.194 [2024-11-19 06:48:47.990557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.194 [2024-11-19 06:48:47.990565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:56.194 [2024-11-19 06:48:47.990574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:24:56.194 [2024-11-19 06:48:47.990582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 
06:48:48.004011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.194 [2024-11-19 06:48:48.004226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:56.194 [2024-11-19 06:48:48.004246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:24:56.194 [2024-11-19 06:48:48.004255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 06:48:48.004666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.194 [2024-11-19 06:48:48.004689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:56.194 [2024-11-19 06:48:48.004700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:24:56.194 [2024-11-19 06:48:48.004709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 06:48:48.040861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.194 [2024-11-19 06:48:48.040911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.194 [2024-11-19 06:48:48.040942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.194 [2024-11-19 06:48:48.040952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 06:48:48.041022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.194 [2024-11-19 06:48:48.041034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.194 [2024-11-19 06:48:48.041043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.194 [2024-11-19 06:48:48.041051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 06:48:48.041127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.194 [2024-11-19 06:48:48.041140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.194 [2024-11-19 06:48:48.041149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.194 [2024-11-19 06:48:48.041158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.194 [2024-11-19 06:48:48.041174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.194 [2024-11-19 06:48:48.041184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.194 [2024-11-19 06:48:48.041193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.194 [2024-11-19 06:48:48.041201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.125965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.126020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.473 [2024-11-19 06:48:48.126034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.126043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.193893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.193976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.473 [2024-11-19 06:48:48.193989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.193998] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.473 [2024-11-19 06:48:48.194115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.473 [2024-11-19 06:48:48.194183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.473 [2024-11-19 06:48:48.194313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:56.473 [2024-11-19 06:48:48.194374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.473 [2024-11-19 06:48:48.194448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.473 [2024-11-19 06:48:48.194520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.473 [2024-11-19 06:48:48.194529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.473 [2024-11-19 06:48:48.194537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.473 [2024-11-19 06:48:48.194675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.798 ms, result 0 00:24:57.864 00:24:57.864 00:24:57.864 06:48:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:59.240 06:48:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:59.501 [2024-11-19 06:48:51.233742] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:59.501 [2024-11-19 06:48:51.234008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78762 ] 00:24:59.501 [2024-11-19 06:48:51.397208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.763 [2024-11-19 06:48:51.501027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.024 [2024-11-19 06:48:51.790125] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:00.024 [2024-11-19 06:48:51.790199] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:00.024 [2024-11-19 06:48:51.951481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.024 [2024-11-19 06:48:51.951552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:00.024 [2024-11-19 06:48:51.951575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.024 [2024-11-19 06:48:51.951583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.024 [2024-11-19 06:48:51.951637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.024 [2024-11-19 06:48:51.951650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.024 [2024-11-19 06:48:51.951662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:00.024 [2024-11-19 06:48:51.951670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.025 [2024-11-19 06:48:51.951691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:00.025 [2024-11-19 06:48:51.952469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:00.025 [2024-11-19 06:48:51.952540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.025 [2024-11-19 06:48:51.952551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.025 [2024-11-19 06:48:51.952561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:25:00.025 [2024-11-19 06:48:51.952570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.025 [2024-11-19 06:48:51.954203] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:00.287 [2024-11-19 06:48:51.968678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.287 [2024-11-19 06:48:51.968728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:00.287 [2024-11-19 06:48:51.968743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.476 ms 00:25:00.287 [2024-11-19 06:48:51.968752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.287 [2024-11-19 06:48:51.968830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.287 [2024-11-19 06:48:51.968840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:00.287 [2024-11-19 06:48:51.968849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:00.287 [2024-11-19 06:48:51.968857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.287 [2024-11-19 06:48:51.976759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:00.287 [2024-11-19 06:48:51.976804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.287 [2024-11-19 06:48:51.976815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.800 ms 00:25:00.287 [2024-11-19 06:48:51.976824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.287 [2024-11-19 06:48:51.976910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.287 [2024-11-19 06:48:51.976920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.287 [2024-11-19 06:48:51.976955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:00.287 [2024-11-19 06:48:51.976964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.287 [2024-11-19 06:48:51.977009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.287 [2024-11-19 06:48:51.977020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.287 [2024-11-19 06:48:51.977029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:00.287 [2024-11-19 06:48:51.977038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.288 [2024-11-19 06:48:51.977061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.288 [2024-11-19 06:48:51.981128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.288 [2024-11-19 06:48:51.981167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.288 [2024-11-19 06:48:51.981177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:25:00.288 [2024-11-19 06:48:51.981189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.288 [2024-11-19 06:48:51.981224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.288 [2024-11-19 06:48:51.981233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.288 [2024-11-19 06:48:51.981242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:00.288 [2024-11-19 06:48:51.981249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.288 [2024-11-19 06:48:51.981300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:00.288 [2024-11-19 06:48:51.981322] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:00.288 [2024-11-19 06:48:51.981360] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:00.288 [2024-11-19 06:48:51.981380] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:00.288 [2024-11-19 06:48:51.981487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:00.288 [2024-11-19 06:48:51.981500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.288 [2024-11-19 06:48:51.981511] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:00.288 [2024-11-19 06:48:51.981525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.288 [2024-11-19 06:48:51.981534] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.288 [2024-11-19 06:48:51.981542] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.288 [2024-11-19 06:48:51.981550] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.288 [2024-11-19 06:48:51.981558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:00.288 [2024-11-19 06:48:51.981567] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:00.288 [2024-11-19 06:48:51.981580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.288 [2024-11-19 06:48:51.981589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.288 [2024-11-19 06:48:51.981597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:25:00.288 [2024-11-19 06:48:51.981606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.288 [2024-11-19 06:48:51.981690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.288 [2024-11-19 06:48:51.981700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.288 [2024-11-19 06:48:51.981710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:00.288 [2024-11-19 06:48:51.981717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.288 [2024-11-19 06:48:51.981821] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.288 [2024-11-19 06:48:51.981837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.288 [2024-11-19 06:48:51.981848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.288 [2024-11-19 06:48:51.981856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.981865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.288 [2024-11-19 06:48:51.981872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.981880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.288 [2024-11-19 06:48:51.981891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.288 [2024-11-19 06:48:51.981900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:00.288 [2024-11-19 06:48:51.981907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.288 [2024-11-19 06:48:51.981915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.288 [2024-11-19 06:48:51.981950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.288 [2024-11-19 06:48:51.981959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.288 [2024-11-19 06:48:51.981969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.288 [2024-11-19 06:48:51.981979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:00.288 [2024-11-19 06:48:51.981993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.288 [2024-11-19 06:48:51.982009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.288 [2024-11-19 06:48:51.982030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.288 [2024-11-19 06:48:51.982051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.288 [2024-11-19 06:48:51.982072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.288 [2024-11-19 06:48:51.982093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.288 [2024-11-19 06:48:51.982113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.288 [2024-11-19 06:48:51.982127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.288 [2024-11-19 06:48:51.982133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:00.288 [2024-11-19 06:48:51.982140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.288 [2024-11-19 06:48:51.982147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:00.288 [2024-11-19 06:48:51.982153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:00.288 [2024-11-19 06:48:51.982160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:00.288 [2024-11-19 06:48:51.982175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:00.288 [2024-11-19 06:48:51.982182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.288 [2024-11-19 06:48:51.982197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.288 [2024-11-19 06:48:51.982206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.288 [2024-11-19 06:48:51.982222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.288 [2024-11-19 06:48:51.982235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.288 [2024-11-19 06:48:51.982242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.288 
[2024-11-19 06:48:51.982250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.288 [2024-11-19 06:48:51.982257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.288 [2024-11-19 06:48:51.982264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.288 [2024-11-19 06:48:51.982272] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.288 [2024-11-19 06:48:51.982298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.288 [2024-11-19 06:48:51.982309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.288 [2024-11-19 06:48:51.982317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:00.288 [2024-11-19 06:48:51.982325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:00.288 [2024-11-19 06:48:51.982334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:00.288 [2024-11-19 06:48:51.982342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:00.288 [2024-11-19 06:48:51.982350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:00.288 [2024-11-19 06:48:51.982357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:00.288 [2024-11-19 06:48:51.982364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:00.288 [2024-11-19 06:48:51.982373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:00.288 [2024-11-19 06:48:51.982381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:00.288 [2024-11-19 06:48:51.982388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:00.288 [2024-11-19 06:48:51.982395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:00.288 [2024-11-19 06:48:51.982405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:00.289 [2024-11-19 06:48:51.982412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:00.289 [2024-11-19 06:48:51.982421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:00.289 [2024-11-19 06:48:51.982432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.289 [2024-11-19 06:48:51.982440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.289 [2024-11-19 06:48:51.982447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.289 [2024-11-19 06:48:51.982455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.289 [2024-11-19 06:48:51.982463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:00.289 [2024-11-19 06:48:51.982471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:51.982479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.289 [2024-11-19 06:48:51.982487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:25:00.289 [2024-11-19 06:48:51.982494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.014114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.014164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.289 [2024-11-19 06:48:52.014177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.575 ms 00:25:00.289 [2024-11-19 06:48:52.014186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.014284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.014294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.289 [2024-11-19 06:48:52.014302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:00.289 [2024-11-19 06:48:52.014311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.059917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.059982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.289 [2024-11-19 06:48:52.059996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.549 ms 00:25:00.289 [2024-11-19 06:48:52.060005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.060053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.060064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.289 [2024-11-19 06:48:52.060074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:00.289 [2024-11-19 06:48:52.060085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.060627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.060654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.289 [2024-11-19 06:48:52.060664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:25:00.289 [2024-11-19 06:48:52.060673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.060819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.060831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.289 [2024-11-19 06:48:52.060841] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:25:00.289 [2024-11-19 06:48:52.060855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.076361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.076616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.289 [2024-11-19 06:48:52.076642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.482 ms 00:25:00.289 [2024-11-19 06:48:52.076651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.090981] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:00.289 [2024-11-19 06:48:52.091165] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.289 [2024-11-19 06:48:52.091185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.091193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.289 [2024-11-19 06:48:52.091203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.423 ms 00:25:00.289 [2024-11-19 06:48:52.091212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.117080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.117146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.289 [2024-11-19 06:48:52.117159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:25:00.289 [2024-11-19 06:48:52.117167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.129951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.130142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.289 [2024-11-19 06:48:52.130162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.730 ms 00:25:00.289 [2024-11-19 06:48:52.130171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.142938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.142982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.289 [2024-11-19 06:48:52.142994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.729 ms 00:25:00.289 [2024-11-19 06:48:52.143002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.143665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.143703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.289 [2024-11-19 06:48:52.143714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:00.289 [2024-11-19 06:48:52.143726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.289 [2024-11-19 06:48:52.208303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.289 [2024-11-19 06:48:52.208368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.289 [2024-11-19 06:48:52.208390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.556 ms 00:25:00.289 [2024-11-19 06:48:52.208400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.219372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.551 [2024-11-19 06:48:52.222363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.222407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.551 [2024-11-19 06:48:52.222420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:25:00.551 [2024-11-19 06:48:52.222429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.222518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.222530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.551 [2024-11-19 06:48:52.222540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:00.551 [2024-11-19 06:48:52.222551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.224288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.224337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.551 [2024-11-19 06:48:52.224348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:25:00.551 [2024-11-19 06:48:52.224357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.224386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.224395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.551 [2024-11-19 06:48:52.224405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.551 [2024-11-19 06:48:52.224413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.224456] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.551 [2024-11-19 06:48:52.224469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.224478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.551 [2024-11-19 06:48:52.224486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:00.551 [2024-11-19 06:48:52.224494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.249910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.249967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.551 [2024-11-19 06:48:52.249982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.396 ms 00:25:00.551 [2024-11-19 06:48:52.249996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.551 [2024-11-19 06:48:52.250086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.551 [2024-11-19 06:48:52.250097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.551 [2024-11-19 06:48:52.250107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:00.551 [2024-11-19 06:48:52.250115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:00.551 [2024-11-19 06:48:52.251374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.365 ms, result 0 00:25:01.937  [2024-11-19T06:48:54.440Z] Copying: 1016/1048576 [kB] (1016 kBps) [2024-11-19T06:48:55.828Z] Copying: 4044/1048576 [kB] (3028 kBps) [2024-11-19T06:48:56.773Z] Copying: 14072/1048576 [kB] (10028 kBps) [2024-11-19T06:48:57.717Z] Copying: 29/1024 [MB] (15 MBps) [2024-11-19T06:48:58.658Z] Copying: 45/1024 [MB] (15 MBps) [2024-11-19T06:48:59.599Z] Copying: 63/1024 [MB] (18 MBps) [2024-11-19T06:49:00.542Z] Copying: 86/1024 [MB] (22 MBps) [2024-11-19T06:49:01.482Z] Copying: 119/1024 [MB] (33 MBps) [2024-11-19T06:49:02.868Z] Copying: 139/1024 [MB] (19 MBps) [2024-11-19T06:49:03.441Z] Copying: 164/1024 [MB] (24 MBps) [2024-11-19T06:49:04.827Z] Copying: 201/1024 [MB] (37 MBps) [2024-11-19T06:49:05.770Z] Copying: 237/1024 [MB] (35 MBps) [2024-11-19T06:49:06.714Z] Copying: 272/1024 [MB] (35 MBps) [2024-11-19T06:49:07.658Z] Copying: 292/1024 [MB] (19 MBps) [2024-11-19T06:49:08.602Z] Copying: 313/1024 [MB] (21 MBps) [2024-11-19T06:49:09.546Z] Copying: 345/1024 [MB] (32 MBps) [2024-11-19T06:49:10.489Z] Copying: 366/1024 [MB] (20 MBps) [2024-11-19T06:49:11.434Z] Copying: 393/1024 [MB] (27 MBps) [2024-11-19T06:49:12.823Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-19T06:49:13.768Z] Copying: 443/1024 [MB] (24 MBps) [2024-11-19T06:49:14.709Z] Copying: 465/1024 [MB] (22 MBps) [2024-11-19T06:49:15.656Z] Copying: 493/1024 [MB] (27 MBps) [2024-11-19T06:49:16.599Z] Copying: 517/1024 [MB] (24 MBps) [2024-11-19T06:49:17.543Z] Copying: 546/1024 [MB] (28 MBps) [2024-11-19T06:49:18.488Z] Copying: 569/1024 [MB] (23 MBps) [2024-11-19T06:49:19.433Z] Copying: 597/1024 [MB] (28 MBps) [2024-11-19T06:49:20.884Z] Copying: 620/1024 [MB] (22 MBps) [2024-11-19T06:49:21.456Z] Copying: 647/1024 [MB] (27 MBps) [2024-11-19T06:49:22.843Z] Copying: 675/1024 [MB] (27 MBps) [2024-11-19T06:49:23.788Z] Copying: 707/1024 [MB] (32 MBps) [2024-11-19T06:49:24.732Z] Copying: 731/1024 [MB] (23 MBps) [2024-11-19T06:49:25.675Z] Copying: 753/1024 [MB] (22 MBps) [2024-11-19T06:49:26.618Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-19T06:49:27.563Z] Copying: 787/1024 [MB] (18 MBps) [2024-11-19T06:49:28.506Z] Copying: 807/1024 [MB] (19 MBps) [2024-11-19T06:49:29.450Z] Copying: 828/1024 [MB] (21 MBps) [2024-11-19T06:49:30.829Z] Copying: 858/1024 [MB] (29 MBps) [2024-11-19T06:49:31.774Z] Copying: 881/1024 [MB] (23 MBps) [2024-11-19T06:49:32.719Z] Copying: 901/1024 [MB] (19 MBps) [2024-11-19T06:49:33.655Z] Copying: 926/1024 [MB] (25 MBps) [2024-11-19T06:49:34.598Z] Copying: 952/1024 [MB] (26 MBps) [2024-11-19T06:49:35.540Z] Copying: 971/1024 [MB] (19 MBps) [2024-11-19T06:49:36.480Z] Copying: 998/1024 [MB] (27 MBps) [2024-11-19T06:49:36.740Z] Copying: 1017/1024 [MB] (18 MBps) [2024-11-19T06:49:37.313Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 06:49:37.023478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.023575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:45.384 [2024-11-19 06:49:37.023605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:45.384 [2024-11-19 06:49:37.023615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.023641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:45.384 [2024-11-19 06:49:37.026950] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.026998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:45.384 [2024-11-19 06:49:37.027010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:25:45.384 [2024-11-19 06:49:37.027019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.027267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.027280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:45.384 [2024-11-19 06:49:37.027294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:25:45.384 [2024-11-19 06:49:37.027303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.041069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.041124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:45.384 [2024-11-19 06:49:37.041136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.748 ms 00:25:45.384 [2024-11-19 06:49:37.041146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.047680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.047742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:45.384 [2024-11-19 06:49:37.047754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.495 ms 00:25:45.384 [2024-11-19 06:49:37.047771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.076511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.076560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:45.384 [2024-11-19 06:49:37.076574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.673 ms 00:25:45.384 [2024-11-19 06:49:37.076583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.093504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.093553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:45.384 [2024-11-19 06:49:37.093565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.873 ms 00:25:45.384 [2024-11-19 06:49:37.093575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.098269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.098319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:45.384 [2024-11-19 06:49:37.098332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:25:45.384 [2024-11-19 06:49:37.098341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.124371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.124422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:45.384 [2024-11-19 06:49:37.124434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.007 ms 00:25:45.384 [2024-11-19 06:49:37.124443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.149646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.149713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:45.384 [2024-11-19 06:49:37.149740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.157 ms 00:25:45.384 [2024-11-19 06:49:37.149748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.174606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.174656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:45.384 [2024-11-19 06:49:37.174669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.811 ms 00:25:45.384 [2024-11-19 06:49:37.174677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.199600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.384 [2024-11-19 06:49:37.199649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:45.384 [2024-11-19 06:49:37.199662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.848 ms 00:25:45.384 [2024-11-19 06:49:37.199670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.384 [2024-11-19 06:49:37.199714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:45.384 [2024-11-19 06:49:37.199731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:45.384 [2024-11-19 06:49:37.199742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:45.384 [2024-11-19 06:49:37.199752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199854] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:45.384 [2024-11-19 06:49:37.199961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.199969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.199977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.199984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.199995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 
06:49:37.200079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:25:45.385 [2024-11-19 06:49:37.200277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:45.385 [2024-11-19 06:49:37.200592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:45.385 [2024-11-19 06:49:37.200602] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9a17d6d-b7e5-4bf1-931a-08f01203310e 00:25:45.385 [2024-11-19 06:49:37.200611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:45.385 [2024-11-19 06:49:37.200619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160960 00:25:45.385 [2024-11-19 06:49:37.200627] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158976 00:25:45.385 [2024-11-19 06:49:37.200643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:25:45.385 [2024-11-19 06:49:37.200651] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:45.385 [2024-11-19 06:49:37.200659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:45.386 [2024-11-19 06:49:37.200667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:45.386 [2024-11-19 06:49:37.200681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:45.386 [2024-11-19 06:49:37.200688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:45.386 [2024-11-19 06:49:37.200695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.386 [2024-11-19 06:49:37.200703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:45.386 [2024-11-19 06:49:37.200713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:25:45.386 [2024-11-19 06:49:37.200721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.214059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.386 [2024-11-19 06:49:37.214109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:25:45.386 [2024-11-19 06:49:37.214120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.305 ms 00:25:45.386 [2024-11-19 06:49:37.214129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.214514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.386 [2024-11-19 06:49:37.214533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.386 [2024-11-19 06:49:37.214543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:45.386 [2024-11-19 06:49:37.214551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.250970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.386 [2024-11-19 06:49:37.251017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.386 [2024-11-19 06:49:37.251028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.386 [2024-11-19 06:49:37.251037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.251099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.386 [2024-11-19 06:49:37.251108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.386 [2024-11-19 06:49:37.251118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.386 [2024-11-19 06:49:37.251126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.251210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.386 [2024-11-19 06:49:37.251228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.386 [2024-11-19 06:49:37.251236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.386 [2024-11-19 06:49:37.251246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.386 [2024-11-19 06:49:37.251263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.386 [2024-11-19 06:49:37.251271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.386 [2024-11-19 06:49:37.251279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.386 [2024-11-19 06:49:37.251286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.337762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.337823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.648 [2024-11-19 06:49:37.337837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.337846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.406573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.406631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.648 [2024-11-19 06:49:37.406644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.406653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.406709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.406719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.648 [2024-11-19 06:49:37.406734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.406743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.406800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.406812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.648 [2024-11-19 06:49:37.406821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.406830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.406951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.406964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.648 [2024-11-19 06:49:37.406973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.406985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.407019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.407029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.648 [2024-11-19 06:49:37.407044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.407052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.407094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.407105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.648 [2024-11-19 06:49:37.407114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.407125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.407172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.648 [2024-11-19 06:49:37.407183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.648 [2024-11-19 06:49:37.407192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.648 [2024-11-19 06:49:37.407200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.648 [2024-11-19 06:49:37.407338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.822 ms, result 0 00:25:46.221 00:25:46.221 00:25:46.482 06:49:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:48.390 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:48.390 06:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:48.390 [2024-11-19 06:49:40.318522] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:25:48.390 [2024-11-19 06:49:40.318655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79254 ] 00:25:48.652 [2024-11-19 06:49:40.491434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.913 [2024-11-19 06:49:40.615008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.177 [2024-11-19 06:49:40.905077] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.177 [2024-11-19 06:49:40.905163] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.177 [2024-11-19 06:49:41.067878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.067956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.177 [2024-11-19 06:49:41.067978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:49.177 [2024-11-19 06:49:41.067987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.068039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.068051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.177 [2024-11-19 06:49:41.068062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:49.177 [2024-11-19 06:49:41.068071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.068091] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.177 [2024-11-19 06:49:41.069170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.177 [2024-11-19 06:49:41.069227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.069237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.177 [2024-11-19 06:49:41.069248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:25:49.177 [2024-11-19 06:49:41.069256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.071106] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:49.177 [2024-11-19 06:49:41.085506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.085557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:49.177 [2024-11-19 06:49:41.085570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.402 ms 00:25:49.177 [2024-11-19 06:49:41.085578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.085662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.085673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:49.177 [2024-11-19 06:49:41.085682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:49.177 [2024-11-19 06:49:41.085690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.093779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:49.177 [2024-11-19 06:49:41.093828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.177 [2024-11-19 06:49:41.093839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.008 ms 00:25:49.177 [2024-11-19 06:49:41.093847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.093952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.093963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.177 [2024-11-19 06:49:41.093971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:49.177 [2024-11-19 06:49:41.093980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.094026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.094036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.177 [2024-11-19 06:49:41.094045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:49.177 [2024-11-19 06:49:41.094053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.094078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.177 [2024-11-19 06:49:41.098023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.098059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.177 [2024-11-19 06:49:41.098072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:25:49.177 [2024-11-19 06:49:41.098082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.098118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.098126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.177 [2024-11-19 06:49:41.098135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:49.177 [2024-11-19 06:49:41.098143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.098196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:49.177 [2024-11-19 06:49:41.098219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:49.177 [2024-11-19 06:49:41.098260] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:49.177 [2024-11-19 06:49:41.098280] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:49.177 [2024-11-19 06:49:41.098386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.177 [2024-11-19 06:49:41.098398] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.177 [2024-11-19 06:49:41.098409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.177 [2024-11-19 06:49:41.098420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098429] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098438] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.177 [2024-11-19 06:49:41.098446] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.177 [2024-11-19 06:49:41.098455] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.177 [2024-11-19 06:49:41.098463] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.177 [2024-11-19 06:49:41.098475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.098483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.177 [2024-11-19 06:49:41.098492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:49.177 [2024-11-19 06:49:41.098500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.098583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.177 [2024-11-19 06:49:41.098593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.177 [2024-11-19 06:49:41.098601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:49.177 [2024-11-19 06:49:41.098610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.177 [2024-11-19 06:49:41.098714] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.177 [2024-11-19 06:49:41.098726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.177 [2024-11-19 06:49:41.098735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:49.177 [2024-11-19 06:49:41.098760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.177 [2024-11-19 06:49:41.098781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.177 [2024-11-19 06:49:41.098795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.177 [2024-11-19 06:49:41.098805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.177 [2024-11-19 06:49:41.098812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.177 [2024-11-19 06:49:41.098819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.177 [2024-11-19 06:49:41.098825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.177 [2024-11-19 06:49:41.098845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.177 [2024-11-19 06:49:41.098858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098864] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.177 [2024-11-19 06:49:41.098878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.177 [2024-11-19 06:49:41.098898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.177 [2024-11-19 06:49:41.098918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.177 [2024-11-19 06:49:41.098942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.177 [2024-11-19 06:49:41.098949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.177 [2024-11-19 06:49:41.098956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.178 [2024-11-19 06:49:41.098963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.178 [2024-11-19 06:49:41.098970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.178 [2024-11-19 06:49:41.098977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.178 [2024-11-19 06:49:41.098983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.178 [2024-11-19 06:49:41.098991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.178 [2024-11-19 06:49:41.098998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.178 [2024-11-19 06:49:41.099004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.178 [2024-11-19 06:49:41.099011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.178 [2024-11-19 06:49:41.099017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.178 [2024-11-19 06:49:41.099024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.178 [2024-11-19 06:49:41.099032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.178 [2024-11-19 06:49:41.099039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.178 [2024-11-19 06:49:41.099045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.178 [2024-11-19 06:49:41.099053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.178 [2024-11-19 06:49:41.099062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.178 [2024-11-19 06:49:41.099069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.178 [2024-11-19 06:49:41.099077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.178 [2024-11-19 06:49:41.099085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.178 [2024-11-19 06:49:41.099092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.178 [2024-11-19 06:49:41.099099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.178 
[2024-11-19 06:49:41.099107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.178 [2024-11-19 06:49:41.099113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.178 [2024-11-19 06:49:41.099120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.178 [2024-11-19 06:49:41.099128] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.178 [2024-11-19 06:49:41.099138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.178 [2024-11-19 06:49:41.099153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.178 [2024-11-19 06:49:41.099160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.178 [2024-11-19 06:49:41.099168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.178 [2024-11-19 06:49:41.099176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.178 [2024-11-19 06:49:41.099183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.178 [2024-11-19 06:49:41.099190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.178 [2024-11-19 06:49:41.099197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:49.178 [2024-11-19 06:49:41.099204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.178 [2024-11-19 06:49:41.099211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.178 [2024-11-19 06:49:41.099248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:49.178 [2024-11-19 06:49:41.099260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.178 [2024-11-19 06:49:41.099283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.178 [2024-11-19 06:49:41.099291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.178 [2024-11-19 06:49:41.099298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.178 [2024-11-19 06:49:41.099306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.178 [2024-11-19 06:49:41.099316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.178 [2024-11-19 06:49:41.099325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:25:49.178 [2024-11-19 06:49:41.099332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.440 [2024-11-19 06:49:41.131781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.131830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.441 [2024-11-19 06:49:41.131842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.402 ms 00:25:49.441 [2024-11-19 06:49:41.131850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.131964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.131974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:49.441 [2024-11-19 06:49:41.131983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:49.441 [2024-11-19 06:49:41.131992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.181202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.181262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.441 [2024-11-19 06:49:41.181276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.150 ms 00:25:49.441 [2024-11-19 06:49:41.181284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.181334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.181344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.441 [2024-11-19 06:49:41.181354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:49.441 [2024-11-19 06:49:41.181365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.181995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.182037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.441 [2024-11-19 06:49:41.182048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:49.441 [2024-11-19 06:49:41.182057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.182218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.182236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.441 [2024-11-19 06:49:41.182245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:25:49.441 [2024-11-19 06:49:41.182260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.198100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.198150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.441 [2024-11-19 06:49:41.198164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.819 ms 00:25:49.441 [2024-11-19 06:49:41.198172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.212467] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:49.441 [2024-11-19 06:49:41.212520] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:49.441 [2024-11-19 06:49:41.212534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.212542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:49.441 [2024-11-19 06:49:41.212553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.250 ms 00:25:49.441 [2024-11-19 06:49:41.212560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.238653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.238716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:49.441 [2024-11-19 06:49:41.238728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.037 ms 00:25:49.441 [2024-11-19 06:49:41.238737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.251888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.251952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:49.441 [2024-11-19 06:49:41.251966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.091 ms 00:25:49.441 [2024-11-19 06:49:41.251973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.264507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.264556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:49.441 [2024-11-19 06:49:41.264568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.486 ms 00:25:49.441 [2024-11-19 06:49:41.264575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.265235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.265271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:49.441 [2024-11-19 06:49:41.265282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:25:49.441 [2024-11-19 06:49:41.265294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.330695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.330769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:49.441 [2024-11-19 06:49:41.330792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.381 ms 00:25:49.441 [2024-11-19 06:49:41.330801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.343058] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:49.441 [2024-11-19 06:49:41.346216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.346260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:49.441 [2024-11-19 06:49:41.346272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.354 ms 00:25:49.441 [2024-11-19 06:49:41.346281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.346371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.346383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:49.441 [2024-11-19 06:49:41.346392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:49.441 [2024-11-19 06:49:41.346404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.347313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.347366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:49.441 [2024-11-19 06:49:41.347379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:25:49.441 [2024-11-19 06:49:41.347388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.347421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.347431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:49.441 [2024-11-19 06:49:41.347442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:49.441 [2024-11-19 06:49:41.347451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.441 [2024-11-19 06:49:41.347497] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:49.441 [2024-11-19 06:49:41.347512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.441 [2024-11-19 06:49:41.347522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:49.441 [2024-11-19 06:49:41.347532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:49.441 [2024-11-19 06:49:41.347557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.703 [2024-11-19 06:49:41.374310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.703 [2024-11-19 06:49:41.374356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:49.703 [2024-11-19 06:49:41.374371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.732 ms 00:25:49.703 [2024-11-19 06:49:41.374386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.703 [2024-11-19 06:49:41.374478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.703 [2024-11-19 06:49:41.374489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:49.703 [2024-11-19 06:49:41.374499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:49.703 [2024-11-19 06:49:41.374509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:49.703 [2024-11-19 06:49:41.375787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.384 ms, result 0 00:25:50.647  [2024-11-19T06:49:43.958Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-19T06:49:44.903Z] Copying: 35/1024 [MB] (19 MBps) [2024-11-19T06:49:45.846Z] Copying: 61/1024 [MB] (26 MBps) [2024-11-19T06:49:46.788Z] Copying: 77/1024 [MB] (15 MBps) [2024-11-19T06:49:47.732Z] Copying: 92/1024 [MB] (15 MBps) [2024-11-19T06:49:48.673Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-19T06:49:49.614Z] Copying: 122/1024 [MB] (17 MBps) [2024-11-19T06:49:50.996Z] Copying: 134/1024 [MB] (12 MBps) [2024-11-19T06:49:51.627Z] Copying: 145/1024 [MB] (11 MBps) [2024-11-19T06:49:52.569Z] Copying: 157/1024 [MB] (11 MBps) [2024-11-19T06:49:53.954Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-19T06:49:54.895Z] Copying: 180/1024 [MB] (11 MBps) [2024-11-19T06:49:55.836Z] Copying: 194/1024 [MB] (14 MBps) [2024-11-19T06:49:56.784Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-19T06:49:57.722Z] Copying: 219/1024 [MB] (13 MBps) [2024-11-19T06:49:58.663Z] Copying: 236/1024 [MB] (16 MBps) [2024-11-19T06:49:59.597Z] Copying: 252/1024 [MB] (16 MBps) [2024-11-19T06:50:00.981Z] Copying: 263/1024 [MB] (10 MBps) [2024-11-19T06:50:01.918Z] Copying: 280/1024 [MB] (17 MBps) [2024-11-19T06:50:02.854Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-19T06:50:03.798Z] Copying: 301/1024 [MB] (10 MBps) [2024-11-19T06:50:04.740Z] Copying: 323/1024 [MB] (21 MBps) [2024-11-19T06:50:05.681Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-19T06:50:06.624Z] Copying: 348/1024 [MB] (13 MBps) [2024-11-19T06:50:07.567Z] Copying: 363/1024 [MB] (15 MBps) [2024-11-19T06:50:08.953Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-19T06:50:09.895Z] Copying: 386/1024 [MB] (12 MBps) [2024-11-19T06:50:10.839Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-19T06:50:11.782Z] Copying: 408/1024 [MB] (11 MBps) [2024-11-19T06:50:12.723Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-19T06:50:13.667Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-19T06:50:14.611Z] Copying: 441/1024 [MB] (10 MBps) [2024-11-19T06:50:15.993Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-19T06:50:16.566Z] Copying: 472/1024 [MB] (20 MBps) [2024-11-19T06:50:17.954Z] Copying: 483/1024 [MB] (11 MBps) [2024-11-19T06:50:18.898Z] Copying: 497/1024 [MB] (13 MBps) [2024-11-19T06:50:19.837Z] Copying: 507/1024 [MB] (10 MBps) [2024-11-19T06:50:20.782Z] Copying: 521/1024 [MB] (14 MBps) [2024-11-19T06:50:21.724Z] Copying: 533/1024 [MB] (11 MBps) [2024-11-19T06:50:22.670Z] Copying: 545/1024 [MB] (12 MBps) [2024-11-19T06:50:23.746Z] Copying: 563/1024 [MB] (18 MBps) [2024-11-19T06:50:24.683Z] Copying: 575/1024 [MB] (12 MBps) [2024-11-19T06:50:25.625Z] Copying: 590/1024 [MB] (14 MBps) [2024-11-19T06:50:26.562Z] Copying: 603/1024 [MB] (13 MBps) [2024-11-19T06:50:27.943Z] Copying: 617/1024 [MB] (13 MBps) [2024-11-19T06:50:28.879Z] Copying: 629/1024 [MB] (11 MBps) [2024-11-19T06:50:29.819Z] Copying: 643/1024 [MB] (13 MBps) [2024-11-19T06:50:30.761Z] Copying: 660/1024 [MB] (17 MBps) [2024-11-19T06:50:31.702Z] Copying: 677/1024 [MB] (16 MBps) [2024-11-19T06:50:32.642Z] Copying: 689/1024 [MB] (11 MBps) [2024-11-19T06:50:33.584Z] Copying: 707/1024 [MB] (18 MBps) [2024-11-19T06:50:34.961Z] Copying: 718/1024 [MB] (11 MBps) [2024-11-19T06:50:35.897Z] Copying: 744/1024 [MB] (25 MBps) [2024-11-19T06:50:36.834Z] Copying: 764/1024 [MB] (19 MBps) [2024-11-19T06:50:37.775Z] Copying: 780/1024 [MB] (16 MBps) [2024-11-19T06:50:38.714Z] Copying: 795/1024 [MB] (15 MBps) 
[2024-11-19T06:50:39.656Z] Copying: 808/1024 [MB] (12 MBps) [2024-11-19T06:50:40.596Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-19T06:50:41.984Z] Copying: 835/1024 [MB] (17 MBps) [2024-11-19T06:50:42.924Z] Copying: 846/1024 [MB] (11 MBps) [2024-11-19T06:50:43.869Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-19T06:50:44.815Z] Copying: 876/1024 [MB] (19 MBps) [2024-11-19T06:50:45.757Z] Copying: 895/1024 [MB] (18 MBps) [2024-11-19T06:50:46.699Z] Copying: 905/1024 [MB] (10 MBps) [2024-11-19T06:50:47.640Z] Copying: 920/1024 [MB] (14 MBps) [2024-11-19T06:50:48.575Z] Copying: 933/1024 [MB] (13 MBps) [2024-11-19T06:50:49.963Z] Copying: 950/1024 [MB] (16 MBps) [2024-11-19T06:50:50.558Z] Copying: 960/1024 [MB] (10 MBps) [2024-11-19T06:50:51.933Z] Copying: 977/1024 [MB] (16 MBps) [2024-11-19T06:50:52.868Z] Copying: 994/1024 [MB] (17 MBps) [2024-11-19T06:50:53.435Z] Copying: 1011/1024 [MB] (16 MBps) [2024-11-19T06:50:53.695Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 06:50:53.446601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.766 [2024-11-19 06:50:53.446665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:01.767 [2024-11-19 06:50:53.446679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:01.767 [2024-11-19 06:50:53.446689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.446712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:01.767 [2024-11-19 06:50:53.449749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.449786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:01.767 [2024-11-19 06:50:53.449802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:27:01.767 [2024-11-19 06:50:53.449810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.450050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.450062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:01.767 [2024-11-19 06:50:53.450072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:27:01.767 [2024-11-19 06:50:53.450080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.453835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.453859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:01.767 [2024-11-19 06:50:53.453870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:27:01.767 [2024-11-19 06:50:53.453880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.461110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.461140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:01.767 [2024-11-19 06:50:53.461151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.206 ms 00:27:01.767 [2024-11-19 06:50:53.461160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.481466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.481495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:27:01.767 [2024-11-19 06:50:53.481505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.246 ms 00:27:01.767 [2024-11-19 06:50:53.481511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.493512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.493541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:01.767 [2024-11-19 06:50:53.493551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.972 ms 00:27:01.767 [2024-11-19 06:50:53.493558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.497482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.497511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:01.767 [2024-11-19 06:50:53.497519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:27:01.767 [2024-11-19 06:50:53.497525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.516236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.516261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:01.767 [2024-11-19 06:50:53.516269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.699 ms 00:27:01.767 [2024-11-19 06:50:53.516275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.534845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.534874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:01.767 [2024-11-19 06:50:53.534881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.545 ms 00:27:01.767 [2024-11-19 06:50:53.534887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.552154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.552178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:01.767 [2024-11-19 06:50:53.552185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:27:01.767 [2024-11-19 06:50:53.552191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.569954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.767 [2024-11-19 06:50:53.569979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:01.767 [2024-11-19 06:50:53.569986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:27:01.767 [2024-11-19 06:50:53.569992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.767 [2024-11-19 06:50:53.570016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:01.767 [2024-11-19 06:50:53.570027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:01.767 [2024-11-19 06:50:53.570038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:01.767 [2024-11-19 06:50:53.570045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 
06:50:53.570051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:27:01.767 [2024-11-19 06:50:53.570196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:01.767 [2024-11-19 06:50:53.570296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:01.768 [2024-11-19 06:50:53.570615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:01.768 [2024-11-19 06:50:53.570623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9a17d6d-b7e5-4bf1-931a-08f01203310e 00:27:01.768 [2024-11-19 06:50:53.570630] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 262656 00:27:01.768 [2024-11-19 06:50:53.570635] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:01.768 [2024-11-19 06:50:53.570641] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:01.768 [2024-11-19 06:50:53.570647] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:01.768 [2024-11-19 06:50:53.570653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:01.768 [2024-11-19 06:50:53.570658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:01.768 [2024-11-19 06:50:53.570669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:01.768 [2024-11-19 06:50:53.570674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:01.768 [2024-11-19 06:50:53.570679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:01.768 [2024-11-19 06:50:53.570685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.768 [2024-11-19 06:50:53.570691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:01.768 [2024-11-19 06:50:53.570698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:27:01.768 [2024-11-19 06:50:53.570703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.580200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.768 [2024-11-19 06:50:53.580223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:01.768 [2024-11-19 06:50:53.580230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.483 ms 00:27:01.768 [2024-11-19 06:50:53.580236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.580497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.768 [2024-11-19 06:50:53.580506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:01.768 [2024-11-19 06:50:53.580516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:27:01.768 [2024-11-19 06:50:53.580521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.606342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.768 [2024-11-19 06:50:53.606368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.768 [2024-11-19 06:50:53.606376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.768 [2024-11-19 06:50:53.606382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.606419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.768 [2024-11-19 06:50:53.606426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.768 [2024-11-19 06:50:53.606435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.768 [2024-11-19 06:50:53.606441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.606480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.768 [2024-11-19 06:50:53.606489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.768 [2024-11-19 06:50:53.606495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.768 [2024-11-19 06:50:53.606501] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.606512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.768 [2024-11-19 06:50:53.606519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.768 [2024-11-19 06:50:53.606526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.768 [2024-11-19 06:50:53.606534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.768 [2024-11-19 06:50:53.665730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.768 [2024-11-19 06:50:53.665762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.768 [2024-11-19 06:50:53.665771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.769 [2024-11-19 06:50:53.665776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:02.027 [2024-11-19 06:50:53.714057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:02.027 [2024-11-19 06:50:53.714128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:02.027 [2024-11-19 06:50:53.714172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:02.027 [2024-11-19 06:50:53.714261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:02.027 [2024-11-19 06:50:53.714300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:02.027 [2024-11-19 06:50:53.714349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:02.027 [2024-11-19 06:50:53.714391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:02.027 [2024-11-19 06:50:53.714398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:02.027 [2024-11-19 06:50:53.714405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.027 [2024-11-19 06:50:53.714492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.877 ms, result 0 00:27:02.658 00:27:02.658 00:27:02.658 06:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:04.568 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:04.568 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:04.568 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:04.568 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:04.568 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:04.827 Process with pid 77442 is not found 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77442 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77442 ']' 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77442 00:27:04.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77442) - No such process 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77442 is not found' 00:27:04.827 06:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:05.088 Remove shared memory files 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:05.088 06:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:05.088 ************************************ 00:27:05.088 END TEST ftl_dirty_shutdown 00:27:05.088 ************************************ 00:27:05.088 00:27:05.088 real 4m7.027s 00:27:05.088 user 4m34.585s 00:27:05.088 sys 0m26.808s 00:27:05.089 06:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.089 06:50:57 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:05.347 06:50:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:05.347 06:50:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:05.347 06:50:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.347 06:50:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:05.347 ************************************ 00:27:05.348 START TEST ftl_upgrade_shutdown 00:27:05.348 ************************************ 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:05.348 * Looking for test storage... 00:27:05.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:05.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.348 --rc genhtml_branch_coverage=1 00:27:05.348 --rc genhtml_function_coverage=1 00:27:05.348 --rc genhtml_legend=1 00:27:05.348 --rc geninfo_all_blocks=1 00:27:05.348 --rc geninfo_unexecuted_blocks=1 00:27:05.348 00:27:05.348 ' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:05.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.348 --rc genhtml_branch_coverage=1 00:27:05.348 --rc genhtml_function_coverage=1 00:27:05.348 --rc genhtml_legend=1 00:27:05.348 --rc geninfo_all_blocks=1 00:27:05.348 --rc geninfo_unexecuted_blocks=1 00:27:05.348 00:27:05.348 ' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:05.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.348 --rc genhtml_branch_coverage=1 00:27:05.348 --rc genhtml_function_coverage=1 00:27:05.348 --rc genhtml_legend=1 00:27:05.348 --rc geninfo_all_blocks=1 00:27:05.348 --rc geninfo_unexecuted_blocks=1 00:27:05.348 00:27:05.348 ' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:05.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.348 --rc genhtml_branch_coverage=1 00:27:05.348 --rc genhtml_function_coverage=1 00:27:05.348 --rc genhtml_legend=1 00:27:05.348 --rc geninfo_all_blocks=1 00:27:05.348 --rc geninfo_unexecuted_blocks=1 00:27:05.348 00:27:05.348 ' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:05.348 06:50:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80111 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80111 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80111 ']' 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:05.348 06:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:05.607 [2024-11-19 06:50:57.342225] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
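The xtrace lines above show ftl/common.sh exporting the test geometry (FTL_BDEV=ftl, FTL_BASE=0000:00:11.0 at 20480 MiB, FTL_CACHE=0000:00:10.0 at 5120 MiB, FTL_L2P_DRAM_LIMIT=2) and tcp_target_setup launching spdk_tgt pinned to core 0, then waitforlisten blocking until pid 80111 answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern follows; the binary and socket paths match the log, but the polling loop is an assumption rather than a copy of the helper:

  # Sketch only: start the SPDK target on core 0 and wait for its RPC socket.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock

  "$SPDK_TGT" --cpumask='[0]' &        # single-core target, as in the log
  tgt_pid=$!

  # Poll the UNIX-domain RPC socket until it responds (what waitforlisten does).
  until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "spdk_tgt pid $tgt_pid ready on $SOCK"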
00:27:05.607 [2024-11-19 06:50:57.342348] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80111 ] 00:27:05.607 [2024-11-19 06:50:57.500097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.865 [2024-11-19 06:50:57.574748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:06.432 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.691 { 00:27:06.691 "name": "basen1", 00:27:06.691 "aliases": [ 00:27:06.691 "ce06edcf-5e76-4cb9-b7f1-dd047d2a3952" 00:27:06.691 ], 00:27:06.691 "product_name": "NVMe disk", 00:27:06.691 "block_size": 4096, 00:27:06.691 "num_blocks": 1310720, 00:27:06.691 "uuid": "ce06edcf-5e76-4cb9-b7f1-dd047d2a3952", 00:27:06.691 "numa_id": -1, 00:27:06.691 "assigned_rate_limits": { 00:27:06.691 "rw_ios_per_sec": 0, 00:27:06.691 "rw_mbytes_per_sec": 0, 00:27:06.691 "r_mbytes_per_sec": 0, 00:27:06.691 "w_mbytes_per_sec": 0 00:27:06.691 }, 00:27:06.691 "claimed": true, 00:27:06.691 "claim_type": "read_many_write_one", 00:27:06.691 "zoned": false, 00:27:06.691 "supported_io_types": { 00:27:06.691 "read": true, 00:27:06.691 "write": true, 00:27:06.691 "unmap": true, 00:27:06.691 "flush": true, 00:27:06.691 "reset": true, 00:27:06.691 "nvme_admin": true, 00:27:06.691 "nvme_io": true, 00:27:06.691 "nvme_io_md": false, 00:27:06.691 "write_zeroes": true, 00:27:06.691 "zcopy": false, 00:27:06.691 "get_zone_info": false, 00:27:06.691 "zone_management": false, 00:27:06.691 "zone_append": false, 00:27:06.691 "compare": true, 00:27:06.691 "compare_and_write": false, 00:27:06.691 "abort": true, 00:27:06.691 "seek_hole": false, 00:27:06.691 "seek_data": false, 00:27:06.691 "copy": true, 00:27:06.691 "nvme_iov_md": false 00:27:06.691 }, 00:27:06.691 "driver_specific": { 00:27:06.691 "nvme": [ 00:27:06.691 { 00:27:06.691 "pci_address": "0000:00:11.0", 00:27:06.691 "trid": { 00:27:06.691 "trtype": "PCIe", 00:27:06.691 "traddr": "0000:00:11.0" 00:27:06.691 }, 00:27:06.691 "ctrlr_data": { 00:27:06.691 "cntlid": 0, 00:27:06.691 "vendor_id": "0x1b36", 00:27:06.691 "model_number": "QEMU NVMe Ctrl", 00:27:06.691 "serial_number": "12341", 00:27:06.691 "firmware_revision": "8.0.0", 00:27:06.691 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:06.691 "oacs": { 00:27:06.691 "security": 0, 00:27:06.691 "format": 1, 00:27:06.691 "firmware": 0, 00:27:06.691 "ns_manage": 1 00:27:06.691 }, 00:27:06.691 "multi_ctrlr": false, 00:27:06.691 "ana_reporting": false 00:27:06.691 }, 00:27:06.691 "vs": { 00:27:06.691 "nvme_version": "1.4" 00:27:06.691 }, 00:27:06.691 "ns_data": { 00:27:06.691 "id": 1, 00:27:06.691 "can_share": false 00:27:06.691 } 00:27:06.691 } 00:27:06.691 ], 00:27:06.691 "mp_policy": "active_passive" 00:27:06.691 } 00:27:06.691 } 00:27:06.691 ]' 00:27:06.691 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7e084fbc-e64d-4698-bd82-81d41866e727 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:06.950 06:50:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e084fbc-e64d-4698-bd82-81d41866e727 00:27:07.208 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:07.478 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=89f83c47-59dd-4b73-8813-7b6ec9d7a61a 00:27:07.479 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 89f83c47-59dd-4b73-8813-7b6ec9d7a61a 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=fdd55e6c-d96c-46dd-9b92-1c8b961d082b 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z fdd55e6c-d96c-46dd-9b92-1c8b961d082b ]] 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 fdd55e6c-d96c-46dd-9b92-1c8b961d082b 5120 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=fdd55e6c-d96c-46dd-9b92-1c8b961d082b 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size fdd55e6c-d96c-46dd-9b92-1c8b961d082b 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fdd55e6c-d96c-46dd-9b92-1c8b961d082b 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:07.739 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdd55e6c-d96c-46dd-9b92-1c8b961d082b 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:07.998 { 00:27:07.998 "name": "fdd55e6c-d96c-46dd-9b92-1c8b961d082b", 00:27:07.998 "aliases": [ 00:27:07.998 "lvs/basen1p0" 00:27:07.998 ], 00:27:07.998 "product_name": "Logical Volume", 00:27:07.998 "block_size": 4096, 00:27:07.998 "num_blocks": 5242880, 00:27:07.998 "uuid": "fdd55e6c-d96c-46dd-9b92-1c8b961d082b", 00:27:07.998 "assigned_rate_limits": { 00:27:07.998 "rw_ios_per_sec": 0, 00:27:07.998 "rw_mbytes_per_sec": 0, 00:27:07.998 "r_mbytes_per_sec": 0, 00:27:07.998 "w_mbytes_per_sec": 0 00:27:07.998 }, 00:27:07.998 "claimed": false, 00:27:07.998 "zoned": false, 00:27:07.998 "supported_io_types": { 00:27:07.998 "read": true, 00:27:07.998 "write": true, 00:27:07.998 "unmap": true, 00:27:07.998 "flush": false, 00:27:07.998 "reset": true, 00:27:07.998 "nvme_admin": false, 00:27:07.998 "nvme_io": false, 00:27:07.998 "nvme_io_md": false, 00:27:07.998 "write_zeroes": 
true, 00:27:07.998 "zcopy": false, 00:27:07.998 "get_zone_info": false, 00:27:07.998 "zone_management": false, 00:27:07.998 "zone_append": false, 00:27:07.998 "compare": false, 00:27:07.998 "compare_and_write": false, 00:27:07.998 "abort": false, 00:27:07.998 "seek_hole": true, 00:27:07.998 "seek_data": true, 00:27:07.998 "copy": false, 00:27:07.998 "nvme_iov_md": false 00:27:07.998 }, 00:27:07.998 "driver_specific": { 00:27:07.998 "lvol": { 00:27:07.998 "lvol_store_uuid": "89f83c47-59dd-4b73-8813-7b6ec9d7a61a", 00:27:07.998 "base_bdev": "basen1", 00:27:07.998 "thin_provision": true, 00:27:07.998 "num_allocated_clusters": 0, 00:27:07.998 "snapshot": false, 00:27:07.998 "clone": false, 00:27:07.998 "esnap_clone": false 00:27:07.998 } 00:27:07.998 } 00:27:07.998 } 00:27:07.998 ]' 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:07.998 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:08.256 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:08.256 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:08.256 06:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:08.515 06:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:08.515 06:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:08.515 06:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d fdd55e6c-d96c-46dd-9b92-1c8b961d082b -c cachen1p0 --l2p_dram_limit 2 00:27:08.515 [2024-11-19 06:51:00.376844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.376883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:08.515 [2024-11-19 06:51:00.376895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:08.515 [2024-11-19 06:51:00.376902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.515 [2024-11-19 06:51:00.376954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.376962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:08.515 [2024-11-19 06:51:00.376971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:08.515 [2024-11-19 06:51:00.376976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.515 [2024-11-19 06:51:00.376992] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:08.515 [2024-11-19 
06:51:00.377500] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:08.515 [2024-11-19 06:51:00.377521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.377527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:08.515 [2024-11-19 06:51:00.377535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:27:08.515 [2024-11-19 06:51:00.377540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.515 [2024-11-19 06:51:00.377587] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f6eed4e1-a0be-4bdb-b11b-a2b0a2e02fa9 00:27:08.515 [2024-11-19 06:51:00.378492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.378514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:08.515 [2024-11-19 06:51:00.378521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:08.515 [2024-11-19 06:51:00.378528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.515 [2024-11-19 06:51:00.383157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.383185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:08.515 [2024-11-19 06:51:00.383194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.591 ms 00:27:08.515 [2024-11-19 06:51:00.383201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.515 [2024-11-19 06:51:00.383229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.515 [2024-11-19 06:51:00.383237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:08.515 [2024-11-19 06:51:00.383243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:08.515 [2024-11-19 06:51:00.383252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.383284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.516 [2024-11-19 06:51:00.383293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:08.516 [2024-11-19 06:51:00.383299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:08.516 [2024-11-19 06:51:00.383309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.383324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:08.516 [2024-11-19 06:51:00.386144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.516 [2024-11-19 06:51:00.386169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:08.516 [2024-11-19 06:51:00.386179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.822 ms 00:27:08.516 [2024-11-19 06:51:00.386185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.386204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.516 [2024-11-19 06:51:00.386211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:08.516 [2024-11-19 06:51:00.386219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:08.516 [2024-11-19 06:51:00.386224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.386238] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:08.516 [2024-11-19 06:51:00.386340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:08.516 [2024-11-19 06:51:00.386352] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:08.516 [2024-11-19 06:51:00.386361] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:08.516 [2024-11-19 06:51:00.386371] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386378] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386385] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:08.516 [2024-11-19 06:51:00.386391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:08.516 [2024-11-19 06:51:00.386399] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:08.516 [2024-11-19 06:51:00.386404] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:08.516 [2024-11-19 06:51:00.386411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.516 [2024-11-19 06:51:00.386416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:08.516 [2024-11-19 06:51:00.386424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:27:08.516 [2024-11-19 06:51:00.386429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.386493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.516 [2024-11-19 06:51:00.386499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:08.516 [2024-11-19 06:51:00.386507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:08.516 [2024-11-19 06:51:00.386517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.516 [2024-11-19 06:51:00.386596] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:08.516 [2024-11-19 06:51:00.386603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:08.516 [2024-11-19 06:51:00.386611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:08.516 [2024-11-19 06:51:00.386629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:08.516 [2024-11-19 06:51:00.386641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:08.516 [2024-11-19 06:51:00.386648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:08.516 [2024-11-19 06:51:00.386653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:08.516 [2024-11-19 06:51:00.386665] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:08.516 [2024-11-19 06:51:00.386672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:08.516 [2024-11-19 06:51:00.386684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:08.516 [2024-11-19 06:51:00.386689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:08.516 [2024-11-19 06:51:00.386703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:08.516 [2024-11-19 06:51:00.386711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:08.516 [2024-11-19 06:51:00.386723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:08.516 [2024-11-19 06:51:00.386740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:08.516 [2024-11-19 06:51:00.386758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:08.516 [2024-11-19 06:51:00.386775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:08.516 [2024-11-19 06:51:00.386794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:08.516 [2024-11-19 06:51:00.386811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:08.516 [2024-11-19 06:51:00.386828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:08.516 [2024-11-19 06:51:00.386844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:08.516 [2024-11-19 06:51:00.386850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386854] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:08.516 [2024-11-19 06:51:00.386861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:08.516 [2024-11-19 06:51:00.386867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.516 [2024-11-19 06:51:00.386880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:08.516 [2024-11-19 06:51:00.386888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:08.516 [2024-11-19 06:51:00.386894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:08.516 [2024-11-19 06:51:00.386901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:08.516 [2024-11-19 06:51:00.386905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:08.516 [2024-11-19 06:51:00.386912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:08.516 [2024-11-19 06:51:00.386919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:08.516 [2024-11-19 06:51:00.386944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.386952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:08.516 [2024-11-19 06:51:00.386959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.386965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.386973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:08.516 [2024-11-19 06:51:00.386978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:08.516 [2024-11-19 06:51:00.386985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:08.516 [2024-11-19 06:51:00.386992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:08.516 [2024-11-19 06:51:00.386999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.387004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.387012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.387018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.387026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:08.516 [2024-11-19 06:51:00.387031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:08.517 [2024-11-19 06:51:00.387040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:08.517 [2024-11-19 06:51:00.387045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:08.517 [2024-11-19 06:51:00.387052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.517 [2024-11-19 06:51:00.387059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:08.517 [2024-11-19 06:51:00.387066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:08.517 [2024-11-19 06:51:00.387072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:08.517 [2024-11-19 06:51:00.387079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:08.517 [2024-11-19 06:51:00.387084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.517 [2024-11-19 06:51:00.387091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:08.517 [2024-11-19 06:51:00.387097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:27:08.517 [2024-11-19 06:51:00.387103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.517 [2024-11-19 06:51:00.387132] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
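The layout dump above is internally consistent and easy to cross-check: the base bdev reported earlier (5242880 blocks of 4096 B) is exactly the 20480.00 MiB base device capacity, and the 3774873 L2P entries at 4 B each come to roughly 14.4 MiB, which is why the l2p region is carved out as 14.50 MiB; the 5120.00 MiB NV cache is likewise split into the 5 chunks that the scrub step walks next. A couple of illustrative one-liners, with every constant copied from the dump:

  # Illustrative arithmetic only; all values are taken from the log above.
  echo $(( 5242880 * 4096 / 1048576 ))                         # 20480 MiB base device
  awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'   # ~14.40 MiB of L2P mapping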
00:27:08.517 [2024-11-19 06:51:00.387142] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:12.720 [2024-11-19 06:51:04.305548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.305639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:12.720 [2024-11-19 06:51:04.305659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3918.399 ms 00:27:12.720 [2024-11-19 06:51:04.305671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.336768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.336834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:12.720 [2024-11-19 06:51:04.336849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.853 ms 00:27:12.720 [2024-11-19 06:51:04.336860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.336962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.336978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:12.720 [2024-11-19 06:51:04.336987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:12.720 [2024-11-19 06:51:04.337000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.372240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.372296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:12.720 [2024-11-19 06:51:04.372308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.198 ms 00:27:12.720 [2024-11-19 06:51:04.372319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.372354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.372370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:12.720 [2024-11-19 06:51:04.372379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:12.720 [2024-11-19 06:51:04.372389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.372985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.373020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:12.720 [2024-11-19 06:51:04.373031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:27:12.720 [2024-11-19 06:51:04.373042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.373094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.373106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:12.720 [2024-11-19 06:51:04.373118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:12.720 [2024-11-19 06:51:04.373133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.390715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.390765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:12.720 [2024-11-19 06:51:04.390777] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.562 ms 00:27:12.720 [2024-11-19 06:51:04.390787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.404499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:12.720 [2024-11-19 06:51:04.405836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.405876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:12.720 [2024-11-19 06:51:04.405890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.963 ms 00:27:12.720 [2024-11-19 06:51:04.405898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.450820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.450865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:12.720 [2024-11-19 06:51:04.450879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.865 ms 00:27:12.720 [2024-11-19 06:51:04.450886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.720 [2024-11-19 06:51:04.450985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.720 [2024-11-19 06:51:04.450998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:12.720 [2024-11-19 06:51:04.451011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:12.720 [2024-11-19 06:51:04.451018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.469963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.469998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:12.721 [2024-11-19 06:51:04.470010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.903 ms 00:27:12.721 [2024-11-19 06:51:04.470017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.488502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.488534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:12.721 [2024-11-19 06:51:04.488544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.446 ms 00:27:12.721 [2024-11-19 06:51:04.488550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.489037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.489094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:12.721 [2024-11-19 06:51:04.489103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:27:12.721 [2024-11-19 06:51:04.489110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.553444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.553474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:12.721 [2024-11-19 06:51:04.553486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.293 ms 00:27:12.721 [2024-11-19 06:51:04.553493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.572650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:12.721 [2024-11-19 06:51:04.572680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:12.721 [2024-11-19 06:51:04.572694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.100 ms 00:27:12.721 [2024-11-19 06:51:04.572701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.590813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.590838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:12.721 [2024-11-19 06:51:04.590847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.082 ms 00:27:12.721 [2024-11-19 06:51:04.590853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.609577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.609604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:12.721 [2024-11-19 06:51:04.609613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.695 ms 00:27:12.721 [2024-11-19 06:51:04.609619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.609650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.609658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:12.721 [2024-11-19 06:51:04.609667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:12.721 [2024-11-19 06:51:04.609673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.609732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.721 [2024-11-19 06:51:04.609740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:12.721 [2024-11-19 06:51:04.609749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:12.721 [2024-11-19 06:51:04.609754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.721 [2024-11-19 06:51:04.610707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4233.535 ms, result 0 00:27:12.721 { 00:27:12.721 "name": "ftl", 00:27:12.721 "uuid": "f6eed4e1-a0be-4bdb-b11b-a2b0a2e02fa9" 00:27:12.721 } 00:27:12.721 06:51:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:12.979 [2024-11-19 06:51:04.809973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.980 06:51:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:13.238 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:13.496 [2024-11-19 06:51:05.218294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:13.496 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:13.496 [2024-11-19 06:51:05.414559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:13.754 06:51:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:14.014 Fill FTL, iteration 1 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80240 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80240 /var/tmp/spdk.tgt.sock 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80240 ']' 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.014 06:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.014 [2024-11-19 06:51:05.822323] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:14.014 [2024-11-19 06:51:05.822439] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80240 ] 00:27:14.272 [2024-11-19 06:51:05.982405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.273 [2024-11-19 06:51:06.074996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.840 06:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.840 06:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:14.840 06:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:15.098 ftln1 00:27:15.098 06:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:15.098 06:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80240 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80240 ']' 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80240 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80240 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.358 killing process with pid 80240 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80240' 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80240 00:27:15.358 06:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80240 00:27:16.734 06:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:16.734 06:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:16.734 [2024-11-19 06:51:08.374670] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:16.734 [2024-11-19 06:51:08.374793] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80280 ] 00:27:16.734 [2024-11-19 06:51:08.533004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.734 [2024-11-19 06:51:08.607703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.108  [2024-11-19T06:51:10.970Z] Copying: 238/1024 [MB] (238 MBps) [2024-11-19T06:51:11.903Z] Copying: 484/1024 [MB] (246 MBps) [2024-11-19T06:51:13.272Z] Copying: 716/1024 [MB] (232 MBps) [2024-11-19T06:51:13.273Z] Copying: 946/1024 [MB] (230 MBps) [2024-11-19T06:51:13.841Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:27:21.912 00:27:21.912 Calculate MD5 checksum, iteration 1 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:21.912 06:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:22.171 [2024-11-19 06:51:13.900514] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:22.171 [2024-11-19 06:51:13.900638] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80343 ] 00:27:22.171 [2024-11-19 06:51:14.061061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.429 [2024-11-19 06:51:14.144239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.802  [2024-11-19T06:51:16.297Z] Copying: 648/1024 [MB] (648 MBps) [2024-11-19T06:51:16.557Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:27:24.628 00:27:24.628 06:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:24.628 06:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:27.170 Fill FTL, iteration 2 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9a0a008830c36e46d2b5525dc96ac527 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:27.170 06:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:27.170 [2024-11-19 06:51:18.810469] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:27.170 [2024-11-19 06:51:18.810578] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80400 ] 00:27:27.170 [2024-11-19 06:51:18.965530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.170 [2024-11-19 06:51:19.040129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.542  [2024-11-19T06:51:21.405Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-19T06:51:22.339Z] Copying: 486/1024 [MB] (244 MBps) [2024-11-19T06:51:23.713Z] Copying: 737/1024 [MB] (251 MBps) [2024-11-19T06:51:23.713Z] Copying: 981/1024 [MB] (244 MBps) [2024-11-19T06:51:24.281Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:27:32.352 00:27:32.352 Calculate MD5 checksum, iteration 2 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.352 06:51:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:32.352 [2024-11-19 06:51:24.155950] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:32.352 [2024-11-19 06:51:24.156038] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80454 ] 00:27:32.613 [2024-11-19 06:51:24.297085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.613 [2024-11-19 06:51:24.374489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.986  [2024-11-19T06:51:26.482Z] Copying: 649/1024 [MB] (649 MBps) [2024-11-19T06:51:27.562Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:27:35.633 00:27:35.633 06:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:35.633 06:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a801928e16f87cbc0d36fb5052ba7df7 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:37.541 [2024-11-19 06:51:29.359567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.541 [2024-11-19 06:51:29.359603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:37.541 [2024-11-19 06:51:29.359613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:37.541 [2024-11-19 06:51:29.359620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.541 [2024-11-19 06:51:29.359638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.541 [2024-11-19 06:51:29.359646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:37.541 [2024-11-19 06:51:29.359652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:37.541 [2024-11-19 06:51:29.359660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.541 [2024-11-19 06:51:29.359675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.541 [2024-11-19 06:51:29.359681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:37.541 [2024-11-19 06:51:29.359687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:37.541 [2024-11-19 06:51:29.359692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.541 [2024-11-19 06:51:29.359742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.173 ms, result 0 00:27:37.541 true 00:27:37.541 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:37.799 { 00:27:37.799 "name": "ftl", 00:27:37.799 "properties": [ 00:27:37.799 { 00:27:37.799 "name": "superblock_version", 00:27:37.799 "value": 5, 00:27:37.799 "read-only": true 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "name": "base_device", 00:27:37.799 "bands": [ 00:27:37.799 { 00:27:37.799 "id": 0, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 
00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 1, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 2, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 3, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 4, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 5, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 6, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 7, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 8, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 9, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 10, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 11, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 12, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 13, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 14, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 15, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 16, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 17, 00:27:37.799 "state": "FREE", 00:27:37.799 "validity": 0.0 00:27:37.799 } 00:27:37.799 ], 00:27:37.799 "read-only": true 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "name": "cache_device", 00:27:37.799 "type": "bdev", 00:27:37.799 "chunks": [ 00:27:37.799 { 00:27:37.799 "id": 0, 00:27:37.799 "state": "INACTIVE", 00:27:37.799 "utilization": 0.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 1, 00:27:37.799 "state": "CLOSED", 00:27:37.799 "utilization": 1.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 2, 00:27:37.799 "state": "CLOSED", 00:27:37.799 "utilization": 1.0 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 3, 00:27:37.799 "state": "OPEN", 00:27:37.799 "utilization": 0.001953125 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "id": 4, 00:27:37.799 "state": "OPEN", 00:27:37.799 "utilization": 0.0 00:27:37.799 } 00:27:37.799 ], 00:27:37.799 "read-only": true 00:27:37.799 }, 00:27:37.799 { 00:27:37.799 "name": "verbose_mode", 00:27:37.799 "value": true, 00:27:37.799 "unit": "", 00:27:37.799 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:37.799 }, 00:27:37.800 { 00:27:37.800 "name": "prep_upgrade_on_shutdown", 00:27:37.800 "value": false, 00:27:37.800 "unit": "", 00:27:37.800 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:37.800 } 00:27:37.800 ] 00:27:37.800 } 00:27:37.800 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:38.058 [2024-11-19 06:51:29.759887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:38.058 [2024-11-19 06:51:29.759917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.058 [2024-11-19 06:51:29.759934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.058 [2024-11-19 06:51:29.759940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.058 [2024-11-19 06:51:29.759955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.058 [2024-11-19 06:51:29.759962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.058 [2024-11-19 06:51:29.759967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.058 [2024-11-19 06:51:29.759973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.058 [2024-11-19 06:51:29.759987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.058 [2024-11-19 06:51:29.759993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.058 [2024-11-19 06:51:29.759998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.058 [2024-11-19 06:51:29.760003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.058 [2024-11-19 06:51:29.760047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.150 ms, result 0 00:27:38.058 true 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:38.058 06:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:38.316 [2024-11-19 06:51:30.120207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.316 [2024-11-19 06:51:30.120235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.316 [2024-11-19 06:51:30.120244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.316 [2024-11-19 06:51:30.120249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.316 [2024-11-19 06:51:30.120265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.316 [2024-11-19 06:51:30.120271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.316 [2024-11-19 06:51:30.120277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.316 [2024-11-19 06:51:30.120282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.316 [2024-11-19 06:51:30.120297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.316 [2024-11-19 06:51:30.120302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.316 [2024-11-19 06:51:30.120308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.316 [2024-11-19 06:51:30.120313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:38.316 [2024-11-19 06:51:30.120357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.138 ms, result 0 00:27:38.316 true 00:27:38.316 06:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.574 { 00:27:38.574 "name": "ftl", 00:27:38.574 "properties": [ 00:27:38.574 { 00:27:38.574 "name": "superblock_version", 00:27:38.574 "value": 5, 00:27:38.574 "read-only": true 00:27:38.574 }, 00:27:38.574 { 00:27:38.574 "name": "base_device", 00:27:38.574 "bands": [ 00:27:38.574 { 00:27:38.574 "id": 0, 00:27:38.574 "state": "FREE", 00:27:38.574 "validity": 0.0 00:27:38.574 }, 00:27:38.574 { 00:27:38.574 "id": 1, 00:27:38.574 "state": "FREE", 00:27:38.574 "validity": 0.0 00:27:38.574 }, 00:27:38.574 { 00:27:38.574 "id": 2, 00:27:38.574 "state": "FREE", 00:27:38.574 "validity": 0.0 00:27:38.574 }, 00:27:38.574 { 00:27:38.574 "id": 3, 00:27:38.574 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 4, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 5, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 6, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 7, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 8, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 9, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 10, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 11, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 12, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 13, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 14, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 15, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 16, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 17, 00:27:38.575 "state": "FREE", 00:27:38.575 "validity": 0.0 00:27:38.575 } 00:27:38.575 ], 00:27:38.575 "read-only": true 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "name": "cache_device", 00:27:38.575 "type": "bdev", 00:27:38.575 "chunks": [ 00:27:38.575 { 00:27:38.575 "id": 0, 00:27:38.575 "state": "INACTIVE", 00:27:38.575 "utilization": 0.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 1, 00:27:38.575 "state": "CLOSED", 00:27:38.575 "utilization": 1.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 2, 00:27:38.575 "state": "CLOSED", 00:27:38.575 "utilization": 1.0 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 3, 00:27:38.575 "state": "OPEN", 00:27:38.575 "utilization": 0.001953125 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "id": 4, 00:27:38.575 "state": "OPEN", 00:27:38.575 "utilization": 0.0 00:27:38.575 } 00:27:38.575 ], 00:27:38.575 "read-only": true 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "name": "verbose_mode", 
00:27:38.575 "value": true, 00:27:38.575 "unit": "", 00:27:38.575 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:38.575 }, 00:27:38.575 { 00:27:38.575 "name": "prep_upgrade_on_shutdown", 00:27:38.575 "value": true, 00:27:38.575 "unit": "", 00:27:38.575 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:38.575 } 00:27:38.575 ] 00:27:38.575 } 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80111 ]] 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80111 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80111 ']' 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80111 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80111 00:27:38.575 killing process with pid 80111 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80111' 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80111 00:27:38.575 06:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80111 00:27:39.142 [2024-11-19 06:51:30.835070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:39.142 [2024-11-19 06:51:30.845211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.142 [2024-11-19 06:51:30.845243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:39.142 [2024-11-19 06:51:30.845253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.142 [2024-11-19 06:51:30.845259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.142 [2024-11-19 06:51:30.845277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:39.142 [2024-11-19 06:51:30.847404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.142 [2024-11-19 06:51:30.847426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:39.142 [2024-11-19 06:51:30.847434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.117 ms 00:27:39.142 [2024-11-19 06:51:30.847441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.908025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.908079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:47.266 [2024-11-19 06:51:37.908090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7060.540 ms 00:27:47.266 [2024-11-19 06:51:37.908097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.909409] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.909432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:47.266 [2024-11-19 06:51:37.909440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.297 ms 00:27:47.266 [2024-11-19 06:51:37.909446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.910319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.910335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:47.266 [2024-11-19 06:51:37.910343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:27:47.266 [2024-11-19 06:51:37.910349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.917999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.918025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:47.266 [2024-11-19 06:51:37.918032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.612 ms 00:27:47.266 [2024-11-19 06:51:37.918038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.923044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.923155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:47.266 [2024-11-19 06:51:37.923167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.981 ms 00:27:47.266 [2024-11-19 06:51:37.923173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.923235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.266 [2024-11-19 06:51:37.923243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:47.266 [2024-11-19 06:51:37.923250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:47.266 [2024-11-19 06:51:37.923259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.266 [2024-11-19 06:51:37.930131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.930319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:47.267 [2024-11-19 06:51:37.930331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.860 ms 00:27:47.267 [2024-11-19 06:51:37.930336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.937176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.937263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:47.267 [2024-11-19 06:51:37.937273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.817 ms 00:27:47.267 [2024-11-19 06:51:37.937278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.944043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.944126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:47.267 [2024-11-19 06:51:37.944137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.742 ms 00:27:47.267 [2024-11-19 06:51:37.944142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.950727] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.950752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:47.267 [2024-11-19 06:51:37.950759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.542 ms 00:27:47.267 [2024-11-19 06:51:37.950764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.950787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:47.267 [2024-11-19 06:51:37.950797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:47.267 [2024-11-19 06:51:37.950805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:47.267 [2024-11-19 06:51:37.950818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:47.267 [2024-11-19 06:51:37.950824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:47.267 [2024-11-19 06:51:37.950910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:47.267 [2024-11-19 06:51:37.950916] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f6eed4e1-a0be-4bdb-b11b-a2b0a2e02fa9 00:27:47.267 [2024-11-19 06:51:37.950935] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:47.267 [2024-11-19 06:51:37.950941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:47.267 [2024-11-19 06:51:37.950947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:47.267 [2024-11-19 06:51:37.950953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:47.267 [2024-11-19 06:51:37.950959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:47.267 [2024-11-19 06:51:37.950964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:47.267 [2024-11-19 06:51:37.950972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:47.267 [2024-11-19 06:51:37.950977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:47.267 [2024-11-19 06:51:37.950983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:47.267 [2024-11-19 06:51:37.950989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.950998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:47.267 [2024-11-19 06:51:37.951007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:27:47.267 [2024-11-19 06:51:37.951013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.960418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.960441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:47.267 [2024-11-19 06:51:37.960449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.393 ms 00:27:47.267 [2024-11-19 06:51:37.960454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.960719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.267 [2024-11-19 06:51:37.960730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:47.267 [2024-11-19 06:51:37.960736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:27:47.267 [2024-11-19 06:51:37.960742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.993056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:37.993083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:47.267 [2024-11-19 06:51:37.993092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:37.993101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.993124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:37.993130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.267 [2024-11-19 06:51:37.993136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:37.993142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.993189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:37.993196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:47.267 [2024-11-19 06:51:37.993202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:37.993208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:37.993223] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:37.993229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.267 [2024-11-19 06:51:37.993235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:37.993241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.051420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.051453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.267 [2024-11-19 06:51:38.051462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.051468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:47.267 [2024-11-19 06:51:38.099324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.267 [2024-11-19 06:51:38.099406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.267 [2024-11-19 06:51:38.099462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.267 [2024-11-19 06:51:38.099545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:47.267 [2024-11-19 06:51:38.099604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 [2024-11-19 06:51:38.099639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.267 [2024-11-19 06:51:38.099652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.267 
[2024-11-19 06:51:38.099690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:47.267 [2024-11-19 06:51:38.099700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.267 [2024-11-19 06:51:38.099706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:47.267 [2024-11-19 06:51:38.099712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.268 [2024-11-19 06:51:38.099802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7254.547 ms, result 0 00:27:50.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80625 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80625 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80625 ']' 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.560 06:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.560 [2024-11-19 06:51:42.406615] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:50.560 [2024-11-19 06:51:42.406733] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80625 ] 00:27:50.820 [2024-11-19 06:51:42.564711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.820 [2024-11-19 06:51:42.647489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.388 [2024-11-19 06:51:43.217944] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.388 [2024-11-19 06:51:43.217993] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.651 [2024-11-19 06:51:43.365254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.365427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:51.651 [2024-11-19 06:51:43.365447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.651 [2024-11-19 06:51:43.365456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.365517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.365528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:51.651 [2024-11-19 06:51:43.365537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:51.651 [2024-11-19 06:51:43.365545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.365570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:51.651 [2024-11-19 06:51:43.366288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:51.651 [2024-11-19 06:51:43.366306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.366313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:51.651 [2024-11-19 06:51:43.366322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.744 ms 00:27:51.651 [2024-11-19 06:51:43.366330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.367535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:51.651 [2024-11-19 06:51:43.380672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.380813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:51.651 [2024-11-19 06:51:43.380837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.138 ms 00:27:51.651 [2024-11-19 06:51:43.380846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.381163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.381196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:51.651 [2024-11-19 06:51:43.381209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:51.651 [2024-11-19 06:51:43.381218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.387262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 
06:51:43.387423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:51.651 [2024-11-19 06:51:43.387440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.950 ms 00:27:51.651 [2024-11-19 06:51:43.387448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.387512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.387521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:51.651 [2024-11-19 06:51:43.387530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:51.651 [2024-11-19 06:51:43.387537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.387603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.387614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:51.651 [2024-11-19 06:51:43.387627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:51.651 [2024-11-19 06:51:43.387635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.387659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:51.651 [2024-11-19 06:51:43.391348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.391379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:51.651 [2024-11-19 06:51:43.391389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.694 ms 00:27:51.651 [2024-11-19 06:51:43.391400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.391425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.651 [2024-11-19 06:51:43.391433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:51.651 [2024-11-19 06:51:43.391441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.651 [2024-11-19 06:51:43.391448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.651 [2024-11-19 06:51:43.391486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:51.651 [2024-11-19 06:51:43.391506] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:51.651 [2024-11-19 06:51:43.391543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:51.651 [2024-11-19 06:51:43.391568] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:51.651 [2024-11-19 06:51:43.391672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:51.651 [2024-11-19 06:51:43.391682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:51.651 [2024-11-19 06:51:43.391693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:51.651 [2024-11-19 06:51:43.391703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:51.652 [2024-11-19 06:51:43.391711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:51.652 [2024-11-19 06:51:43.391723] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:51.652 [2024-11-19 06:51:43.391730] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:51.652 [2024-11-19 06:51:43.391737] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:51.652 [2024-11-19 06:51:43.391745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:51.652 [2024-11-19 06:51:43.391752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.652 [2024-11-19 06:51:43.391760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:51.652 [2024-11-19 06:51:43.391767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:27:51.652 [2024-11-19 06:51:43.391774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.652 [2024-11-19 06:51:43.391874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.652 [2024-11-19 06:51:43.391883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:51.652 [2024-11-19 06:51:43.391890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:51.652 [2024-11-19 06:51:43.391900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.652 [2024-11-19 06:51:43.392022] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:51.652 [2024-11-19 06:51:43.392034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:51.652 [2024-11-19 06:51:43.392042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:51.652 [2024-11-19 06:51:43.392064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:51.652 [2024-11-19 06:51:43.392078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:51.652 [2024-11-19 06:51:43.392086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:51.652 [2024-11-19 06:51:43.392092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:51.652 [2024-11-19 06:51:43.392106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:51.652 [2024-11-19 06:51:43.392113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:51.652 [2024-11-19 06:51:43.392132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:51.652 [2024-11-19 06:51:43.392138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:51.652 [2024-11-19 06:51:43.392151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:51.652 [2024-11-19 06:51:43.392158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392164] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:51.652 [2024-11-19 06:51:43.392171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:51.652 [2024-11-19 06:51:43.392191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:51.652 [2024-11-19 06:51:43.392217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:51.652 [2024-11-19 06:51:43.392236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:51.652 [2024-11-19 06:51:43.392256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:51.652 [2024-11-19 06:51:43.392286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:51.652 [2024-11-19 06:51:43.392306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:51.652 [2024-11-19 06:51:43.392325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:51.652 [2024-11-19 06:51:43.392331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392337] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:51.652 [2024-11-19 06:51:43.392345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:51.652 [2024-11-19 06:51:43.392352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.652 [2024-11-19 06:51:43.392374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:51.652 [2024-11-19 06:51:43.392381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:51.652 [2024-11-19 06:51:43.392389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:51.652 [2024-11-19 06:51:43.392396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:51.652 [2024-11-19 06:51:43.392402] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:51.652 [2024-11-19 06:51:43.392409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:51.652 [2024-11-19 06:51:43.392417] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:51.652 [2024-11-19 06:51:43.392426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:51.652 [2024-11-19 06:51:43.392441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:51.652 [2024-11-19 06:51:43.392462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:51.652 [2024-11-19 06:51:43.392469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:51.652 [2024-11-19 06:51:43.392476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:51.652 [2024-11-19 06:51:43.392482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:51.652 [2024-11-19 06:51:43.392515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:51.653 [2024-11-19 06:51:43.392523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:51.653 [2024-11-19 06:51:43.392529] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:51.653 [2024-11-19 06:51:43.392537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.653 [2024-11-19 06:51:43.392545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.653 [2024-11-19 06:51:43.392552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:51.653 [2024-11-19 06:51:43.392558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:51.653 [2024-11-19 06:51:43.392565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:51.653 [2024-11-19 06:51:43.392572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.653 [2024-11-19 06:51:43.392580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:51.653 [2024-11-19 06:51:43.392588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:27:51.653 [2024-11-19 06:51:43.392597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.653 [2024-11-19 06:51:43.392638] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:51.653 [2024-11-19 06:51:43.392648] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:55.857 [2024-11-19 06:51:47.068652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.068737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:55.857 [2024-11-19 06:51:47.068756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3675.997 ms 00:27:55.857 [2024-11-19 06:51:47.068765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.100229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.100289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:55.857 [2024-11-19 06:51:47.100304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.184 ms 00:27:55.857 [2024-11-19 06:51:47.100313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.100421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.100439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:55.857 [2024-11-19 06:51:47.100450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:55.857 [2024-11-19 06:51:47.100458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.135404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.135661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:55.857 [2024-11-19 06:51:47.135683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.905 ms 00:27:55.857 [2024-11-19 06:51:47.135698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.135737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.135747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:55.857 [2024-11-19 06:51:47.135756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:55.857 [2024-11-19 06:51:47.135764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.136368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.136391] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:55.857 [2024-11-19 06:51:47.136402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:27:55.857 [2024-11-19 06:51:47.136410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.136464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.136473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:55.857 [2024-11-19 06:51:47.136482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:55.857 [2024-11-19 06:51:47.136489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.153791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.153837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:55.857 [2024-11-19 06:51:47.153848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.277 ms 00:27:55.857 [2024-11-19 06:51:47.153856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.167956] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:55.857 [2024-11-19 06:51:47.168005] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:55.857 [2024-11-19 06:51:47.168019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.168028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:55.857 [2024-11-19 06:51:47.168037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.027 ms 00:27:55.857 [2024-11-19 06:51:47.168045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.182667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.182715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:55.857 [2024-11-19 06:51:47.182727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.570 ms 00:27:55.857 [2024-11-19 06:51:47.182735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.195112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.195156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:55.857 [2024-11-19 06:51:47.195168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.317 ms 00:27:55.857 [2024-11-19 06:51:47.195176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.207524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.207583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:55.857 [2024-11-19 06:51:47.207597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.301 ms 00:27:55.857 [2024-11-19 06:51:47.207604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.208275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.208310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:55.857 [2024-11-19 
06:51:47.208321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:27:55.857 [2024-11-19 06:51:47.208330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.283519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.283601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:55.857 [2024-11-19 06:51:47.283617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.166 ms 00:27:55.857 [2024-11-19 06:51:47.283626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.857 [2024-11-19 06:51:47.294803] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:55.857 [2024-11-19 06:51:47.295872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.857 [2024-11-19 06:51:47.295915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:55.857 [2024-11-19 06:51:47.295942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.185 ms 00:27:55.857 [2024-11-19 06:51:47.295951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.296065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.296080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:55.858 [2024-11-19 06:51:47.296091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:55.858 [2024-11-19 06:51:47.296099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.296162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.296173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:55.858 [2024-11-19 06:51:47.296182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:55.858 [2024-11-19 06:51:47.296191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.296215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.296224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:55.858 [2024-11-19 06:51:47.296233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:55.858 [2024-11-19 06:51:47.296245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.296282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:55.858 [2024-11-19 06:51:47.296293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.296301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:55.858 [2024-11-19 06:51:47.296310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:55.858 [2024-11-19 06:51:47.296319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.321386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.321435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:55.858 [2024-11-19 06:51:47.321448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.046 ms 00:27:55.858 [2024-11-19 06:51:47.321457] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.321545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.321555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:55.858 [2024-11-19 06:51:47.321566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:55.858 [2024-11-19 06:51:47.321574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.322829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3957.066 ms, result 0 00:27:55.858 [2024-11-19 06:51:47.337804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.858 [2024-11-19 06:51:47.353796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:55.858 [2024-11-19 06:51:47.361978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:55.858 [2024-11-19 06:51:47.606014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.606064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:55.858 [2024-11-19 06:51:47.606078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:55.858 [2024-11-19 06:51:47.606089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.606114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.606124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:55.858 [2024-11-19 06:51:47.606132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:55.858 [2024-11-19 06:51:47.606140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.606161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.858 [2024-11-19 06:51:47.606169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:55.858 [2024-11-19 06:51:47.606179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:55.858 [2024-11-19 06:51:47.606187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.858 [2024-11-19 06:51:47.606248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.227 ms, result 0 00:27:55.858 true 00:27:55.858 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.120 { 00:27:56.120 "name": "ftl", 00:27:56.120 "properties": [ 00:27:56.120 { 00:27:56.120 "name": "superblock_version", 00:27:56.120 "value": 5, 00:27:56.120 "read-only": true 00:27:56.120 }, 
00:27:56.120 {
00:27:56.120 "name": "base_device",
00:27:56.120 "bands": [
00:27:56.120 {
00:27:56.120 "id": 0,
00:27:56.120 "state": "CLOSED",
00:27:56.120 "validity": 1.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 1,
00:27:56.120 "state": "CLOSED",
00:27:56.120 "validity": 1.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 2,
00:27:56.120 "state": "CLOSED",
00:27:56.120 "validity": 0.007843137254901933
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 3,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 4,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 5,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 6,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 7,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 8,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 9,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 10,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 11,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 12,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 13,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 14,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 15,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 16,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 17,
00:27:56.120 "state": "FREE",
00:27:56.120 "validity": 0.0
00:27:56.120 }
00:27:56.120 ],
00:27:56.120 "read-only": true
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "name": "cache_device",
00:27:56.120 "type": "bdev",
00:27:56.120 "chunks": [
00:27:56.120 {
00:27:56.120 "id": 0,
00:27:56.120 "state": "INACTIVE",
00:27:56.120 "utilization": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 1,
00:27:56.120 "state": "OPEN",
00:27:56.120 "utilization": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 2,
00:27:56.120 "state": "OPEN",
00:27:56.120 "utilization": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 3,
00:27:56.120 "state": "FREE",
00:27:56.120 "utilization": 0.0
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "id": 4,
00:27:56.120 "state": "FREE",
00:27:56.120 "utilization": 0.0
00:27:56.120 }
00:27:56.120 ],
00:27:56.120 "read-only": true
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "name": "verbose_mode",
00:27:56.120 "value": true,
00:27:56.120 "unit": "",
00:27:56.120 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:56.120 },
00:27:56.120 {
00:27:56.120 "name": "prep_upgrade_on_shutdown",
00:27:56.120 "value": false,
00:27:56.120 "unit": "",
00:27:56.120 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:56.120 }
00:27:56.120 ]
00:27:56.120 }
00:27:56.120 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name ==
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:56.120 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:56.120 06:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:56.382 Validate MD5 checksum, iteration 1 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.382 06:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.643 [2024-11-19 06:51:48.373292] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:56.643 [2024-11-19 06:51:48.373669] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80706 ] 00:27:56.643 [2024-11-19 06:51:48.538017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.904 [2024-11-19 06:51:48.659880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.291  [2024-11-19T06:51:51.604Z] Copying: 488/1024 [MB] (488 MBps) [2024-11-19T06:51:52.545Z] Copying: 1024/1024 [MB] (average 515 MBps) 00:28:00.616 00:28:00.616 06:51:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:00.616 06:51:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9a0a008830c36e46d2b5525dc96ac527 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9a0a008830c36e46d2b5525dc96ac527 != \9\a\0\a\0\0\8\8\3\0\c\3\6\e\4\6\d\2\b\5\5\2\5\d\c\9\6\a\c\5\2\7 ]] 00:28:03.156 Validate MD5 checksum, iteration 2 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:03.156 06:51:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.156 [2024-11-19 06:51:54.541290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:28:03.156 [2024-11-19 06:51:54.541528] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80773 ] 00:28:03.156 [2024-11-19 06:51:54.701621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.156 [2024-11-19 06:51:54.795456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.540  [2024-11-19T06:51:57.042Z] Copying: 552/1024 [MB] (552 MBps) [2024-11-19T06:52:00.339Z] Copying: 1024/1024 [MB] (average 601 MBps) 00:28:08.410 00:28:08.410 06:52:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:08.410 06:52:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.954 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.954 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a801928e16f87cbc0d36fb5052ba7df7 00:28:10.954 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a801928e16f87cbc0d36fb5052ba7df7 != \a\8\0\1\9\2\8\e\1\6\f\8\7\c\b\c\0\d\3\6\f\b\5\0\5\2\b\a\7\d\f\7 ]] 00:28:10.954 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.954 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80625 ]] 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80625 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:10.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80862 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80862 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80862 ']' 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 06:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:10.955 [2024-11-19 06:52:02.334731] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:10.955 [2024-11-19 06:52:02.335009] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80862 ] 00:28:10.955 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80625 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:10.955 [2024-11-19 06:52:02.480858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.955 [2024-11-19 06:52:02.570223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.527 [2024-11-19 06:52:03.198109] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.527 [2024-11-19 06:52:03.198164] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.527 [2024-11-19 06:52:03.346805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.527 [2024-11-19 06:52:03.346842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:11.527 [2024-11-19 06:52:03.346853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.527 [2024-11-19 06:52:03.346860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.346901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.346909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:11.528 [2024-11-19 06:52:03.346916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:11.528 [2024-11-19 06:52:03.346935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.346955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:11.528 [2024-11-19 06:52:03.347502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:11.528 [2024-11-19 06:52:03.347517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.347523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:11.528 [2024-11-19 06:52:03.347532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:28:11.528 [2024-11-19 06:52:03.347538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.347768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:11.528 [2024-11-19 06:52:03.361344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.361476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:11.528 [2024-11-19 06:52:03.361491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.574 ms 
00:28:11.528 [2024-11-19 06:52:03.361498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.368399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.368497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:11.528 [2024-11-19 06:52:03.368513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:11.528 [2024-11-19 06:52:03.368519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.368820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.368830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:11.528 [2024-11-19 06:52:03.368837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:28:11.528 [2024-11-19 06:52:03.368843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.368886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.368894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:11.528 [2024-11-19 06:52:03.368900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:11.528 [2024-11-19 06:52:03.368905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.368938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.368946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:11.528 [2024-11-19 06:52:03.368953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:11.528 [2024-11-19 06:52:03.368959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.368975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:11.528 [2024-11-19 06:52:03.371266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.371297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:11.528 [2024-11-19 06:52:03.371305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.294 ms 00:28:11.528 [2024-11-19 06:52:03.371318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.371346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.371356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:11.528 [2024-11-19 06:52:03.371363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:11.528 [2024-11-19 06:52:03.371368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.371384] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:11.528 [2024-11-19 06:52:03.371401] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:11.528 [2024-11-19 06:52:03.371429] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:11.528 [2024-11-19 06:52:03.371444] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:11.528 [2024-11-19 
06:52:03.371527] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:11.528 [2024-11-19 06:52:03.371536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:11.528 [2024-11-19 06:52:03.371544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:11.528 [2024-11-19 06:52:03.371552] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:11.528 [2024-11-19 06:52:03.371559] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:11.528 [2024-11-19 06:52:03.371574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:11.528 [2024-11-19 06:52:03.371580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:11.528 [2024-11-19 06:52:03.371586] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:11.528 [2024-11-19 06:52:03.371592] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:11.528 [2024-11-19 06:52:03.371601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.371607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:11.528 [2024-11-19 06:52:03.371613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:28:11.528 [2024-11-19 06:52:03.371620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.371686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.528 [2024-11-19 06:52:03.371692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:11.528 [2024-11-19 06:52:03.371698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:11.528 [2024-11-19 06:52:03.371704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.528 [2024-11-19 06:52:03.371780] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:11.528 [2024-11-19 06:52:03.371791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:11.528 [2024-11-19 06:52:03.371797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.528 [2024-11-19 06:52:03.371803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.371809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:11.528 [2024-11-19 06:52:03.371814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.371820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:11.528 [2024-11-19 06:52:03.371825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:11.528 [2024-11-19 06:52:03.371832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:11.528 [2024-11-19 06:52:03.371837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.371843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:11.528 [2024-11-19 06:52:03.371850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:11.528 [2024-11-19 06:52:03.371856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 
06:52:03.371861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:11.528 [2024-11-19 06:52:03.371866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:11.528 [2024-11-19 06:52:03.371871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.371877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:11.528 [2024-11-19 06:52:03.371882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:11.528 [2024-11-19 06:52:03.371887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.371892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:11.528 [2024-11-19 06:52:03.371897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:11.528 [2024-11-19 06:52:03.371902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.528 [2024-11-19 06:52:03.371907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:11.528 [2024-11-19 06:52:03.371918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:11.528 [2024-11-19 06:52:03.372051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.528 [2024-11-19 06:52:03.372078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:11.528 [2024-11-19 06:52:03.372093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:11.528 [2024-11-19 06:52:03.372107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.528 [2024-11-19 06:52:03.372121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:11.528 [2024-11-19 06:52:03.372135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:11.528 [2024-11-19 06:52:03.372149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.528 [2024-11-19 06:52:03.372162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:11.528 [2024-11-19 06:52:03.372176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:11.528 [2024-11-19 06:52:03.372189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:11.528 [2024-11-19 06:52:03.372216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:11.528 [2024-11-19 06:52:03.372230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:11.528 [2024-11-19 06:52:03.372257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:11.528 [2024-11-19 06:52:03.372297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:11.528 [2024-11-19 06:52:03.372311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:11.528 [2024-11-19 06:52:03.372340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:11.528 
[2024-11-19 06:52:03.372405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.528 [2024-11-19 06:52:03.372423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.528 [2024-11-19 06:52:03.372438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:11.528 [2024-11-19 06:52:03.372453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:11.528 [2024-11-19 06:52:03.372467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:11.528 [2024-11-19 06:52:03.372481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:11.528 [2024-11-19 06:52:03.372495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:11.528 [2024-11-19 06:52:03.372509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:11.529 [2024-11-19 06:52:03.372524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:11.529 [2024-11-19 06:52:03.372547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:11.529 [2024-11-19 06:52:03.372627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:11.529 [2024-11-19 06:52:03.372693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:11.529 [2024-11-19 06:52:03.372715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:11.529 [2024-11-19 06:52:03.372777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:11.529 [2024-11-19 06:52:03.372802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:11.529 [2024-11-19 06:52:03.372975] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:11.529 [2024-11-19 06:52:03.372981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:11.529 [2024-11-19 06:52:03.372997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:11.529 [2024-11-19 06:52:03.373003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:11.529 [2024-11-19 06:52:03.373008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:11.529 [2024-11-19 06:52:03.373017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.373023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:11.529 [2024-11-19 06:52:03.373029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:28:11.529 [2024-11-19 06:52:03.373037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.394366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.394393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:11.529 [2024-11-19 06:52:03.394401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.272 ms 00:28:11.529 [2024-11-19 06:52:03.394407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.394437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.394443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:11.529 [2024-11-19 06:52:03.394449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:11.529 [2024-11-19 06:52:03.394455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.421046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.421073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:11.529 [2024-11-19 06:52:03.421082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.552 ms 00:28:11.529 [2024-11-19 06:52:03.421088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.421111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.421118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:11.529 [2024-11-19 06:52:03.421124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:11.529 [2024-11-19 06:52:03.421132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.421206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.421214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:11.529 [2024-11-19 06:52:03.421221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:11.529 [2024-11-19 06:52:03.421228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.421261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.421268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:11.529 [2024-11-19 06:52:03.421274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:11.529 [2024-11-19 06:52:03.421279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.434453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.434479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:11.529 [2024-11-19 06:52:03.434487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.153 ms 00:28:11.529 [2024-11-19 06:52:03.434495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.529 [2024-11-19 06:52:03.434574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.529 [2024-11-19 06:52:03.434583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:11.529 [2024-11-19 06:52:03.434590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.529 [2024-11-19 06:52:03.434596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.791 [2024-11-19 06:52:03.462865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.791 [2024-11-19 06:52:03.462912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:11.791 [2024-11-19 06:52:03.462933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.253 ms 00:28:11.791 [2024-11-19 06:52:03.462941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.791 [2024-11-19 06:52:03.470293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.791 [2024-11-19 06:52:03.470438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:11.791 [2024-11-19 06:52:03.470452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:28:11.791 [2024-11-19 06:52:03.470459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.791 [2024-11-19 06:52:03.517940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.791 [2024-11-19 06:52:03.518048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:11.791 [2024-11-19 06:52:03.518092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.434 ms 00:28:11.791 [2024-11-19 06:52:03.518111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.791 [2024-11-19 06:52:03.518238] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:11.791 [2024-11-19 06:52:03.518359] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:11.791 [2024-11-19 06:52:03.518476] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:11.791 [2024-11-19 06:52:03.518589] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:11.791 [2024-11-19 06:52:03.518613] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.792 [2024-11-19 06:52:03.518629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:11.792 [2024-11-19 06:52:03.518670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.467 ms 00:28:11.792 [2024-11-19 06:52:03.518688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.792 [2024-11-19 06:52:03.518740] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:11.792 [2024-11-19 06:52:03.518769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.792 [2024-11-19 06:52:03.518816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:11.792 [2024-11-19 06:52:03.518835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:11.792 [2024-11-19 06:52:03.518850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.792 [2024-11-19 06:52:03.531414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.792 [2024-11-19 06:52:03.531513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:11.792 [2024-11-19 06:52:03.531554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.537 ms 00:28:11.792 [2024-11-19 06:52:03.531586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.792 [2024-11-19 06:52:03.538036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.792 [2024-11-19 06:52:03.538116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:11.792 [2024-11-19 06:52:03.538155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:11.792 [2024-11-19 06:52:03.538174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.792 [2024-11-19 06:52:03.538249] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:11.792 [2024-11-19 06:52:03.538427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.792 [2024-11-19 06:52:03.538463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:11.792 [2024-11-19 06:52:03.538480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:28:11.792 [2024-11-19 06:52:03.538494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.365 [2024-11-19 06:52:04.219763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.365 [2024-11-19 06:52:04.220032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:12.365 [2024-11-19 06:52:04.220432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 680.585 ms 00:28:12.365 [2024-11-19 06:52:04.220492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.365 [2024-11-19 06:52:04.225645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.365 [2024-11-19 06:52:04.225830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:12.365 [2024-11-19 06:52:04.226067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.808 ms 00:28:12.365 [2024-11-19 06:52:04.226125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.365 [2024-11-19 06:52:04.227480] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:12.365 [2024-11-19 06:52:04.227753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.365 [2024-11-19 06:52:04.227832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:12.365 [2024-11-19 06:52:04.227861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.542 ms 00:28:12.365 [2024-11-19 06:52:04.228289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.365 [2024-11-19 06:52:04.228418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.365 [2024-11-19 06:52:04.228451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:12.365 [2024-11-19 06:52:04.228552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:12.365 [2024-11-19 06:52:04.228593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.365 [2024-11-19 06:52:04.228688] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 690.424 ms, result 0 00:28:12.365 [2024-11-19 06:52:04.228849] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:12.365 [2024-11-19 06:52:04.229212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.365 [2024-11-19 06:52:04.229266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:12.365 [2024-11-19 06:52:04.229289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:28:12.365 [2024-11-19 06:52:04.229407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.043980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.044174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:13.310 [2024-11-19 06:52:05.044246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 813.458 ms 00:28:13.310 [2024-11-19 06:52:05.044273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.048887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.049084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:13.310 [2024-11-19 06:52:05.049220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.657 ms 00:28:13.310 [2024-11-19 06:52:05.049248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.050098] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:13.310 [2024-11-19 06:52:05.050303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.050337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:13.310 [2024-11-19 06:52:05.050405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.004 ms 00:28:13.310 [2024-11-19 06:52:05.050428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.050482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.050508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:13.310 [2024-11-19 06:52:05.050528] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:13.310 [2024-11-19 06:52:05.050598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.050649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 821.794 ms, result 0 00:28:13.310 [2024-11-19 06:52:05.050693] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:13.310 [2024-11-19 06:52:05.050707] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:13.310 [2024-11-19 06:52:05.050718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.050728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:13.310 [2024-11-19 06:52:05.050738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1512.480 ms 00:28:13.310 [2024-11-19 06:52:05.050746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.050777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.050793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:13.310 [2024-11-19 06:52:05.050802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:13.310 [2024-11-19 06:52:05.050810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.061960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:13.310 [2024-11-19 06:52:05.062093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.062105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:13.310 [2024-11-19 06:52:05.062115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.266 ms 00:28:13.310 [2024-11-19 06:52:05.062124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.062868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.062907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:13.310 [2024-11-19 06:52:05.062917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:28:13.310 [2024-11-19 06:52:05.062947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.065226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.065408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:13.310 [2024-11-19 06:52:05.065425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.260 ms 00:28:13.310 [2024-11-19 06:52:05.065433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.065482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.065495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:13.310 [2024-11-19 06:52:05.065511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:13.310 [2024-11-19 06:52:05.065520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.065635] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.065647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:13.310 [2024-11-19 06:52:05.065657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:13.310 [2024-11-19 06:52:05.065667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.310 [2024-11-19 06:52:05.065689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.310 [2024-11-19 06:52:05.065699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:13.310 [2024-11-19 06:52:05.065708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:13.311 [2024-11-19 06:52:05.065716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.311 [2024-11-19 06:52:05.065757] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:13.311 [2024-11-19 06:52:05.065769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.311 [2024-11-19 06:52:05.065777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:13.311 [2024-11-19 06:52:05.065785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:13.311 [2024-11-19 06:52:05.065795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.311 [2024-11-19 06:52:05.065853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.311 [2024-11-19 06:52:05.065863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:13.311 [2024-11-19 06:52:05.065872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:13.311 [2024-11-19 06:52:05.065883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.311 [2024-11-19 06:52:05.067461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1720.049 ms, result 0 00:28:13.311 [2024-11-19 06:52:05.082603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.311 [2024-11-19 06:52:05.098622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:13.311 [2024-11-19 06:52:05.107352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:13.311 Validate MD5 checksum, iteration 1 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:13.311 06:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.311 [2024-11-19 06:52:05.223686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:13.311 [2024-11-19 06:52:05.224170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80891 ] 00:28:13.573 [2024-11-19 06:52:05.390049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.833 [2024-11-19 06:52:05.509795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.221  [2024-11-19T06:52:08.138Z] Copying: 568/1024 [MB] (568 MBps) [2024-11-19T06:52:10.689Z] Copying: 1024/1024 [MB] (average 574 MBps) 00:28:18.760 00:28:18.760 06:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:18.760 06:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9a0a008830c36e46d2b5525dc96ac527 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9a0a008830c36e46d2b5525dc96ac527 != \9\a\0\a\0\0\8\8\3\0\c\3\6\e\4\6\d\2\b\5\5\2\5\d\c\9\6\a\c\5\2\7 ]] 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:20.671 Validate MD5 checksum, iteration 2 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:20.671 06:52:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:20.671 [2024-11-19 06:52:12.358003] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:20.672 [2024-11-19 06:52:12.358237] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80969 ] 00:28:20.672 [2024-11-19 06:52:12.517940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.932 [2024-11-19 06:52:12.613558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.317  [2024-11-19T06:52:15.187Z] Copying: 556/1024 [MB] (556 MBps) [2024-11-19T06:52:16.566Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:28:24.637 00:28:24.637 06:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:24.637 06:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a801928e16f87cbc0d36fb5052ba7df7 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a801928e16f87cbc0d36fb5052ba7df7 != \a\8\0\1\9\2\8\e\1\6\f\8\7\c\b\c\0\d\3\6\f\b\5\0\5\2\b\a\7\d\f\7 ]] 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80862 ]] 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80862 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80862 ']' 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80862 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80862 00:28:27.184 killing process with pid 80862 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80862' 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80862 00:28:27.184 06:52:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80862 00:28:27.446 [2024-11-19 06:52:19.283959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:27.446 [2024-11-19 06:52:19.297269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.297303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:27.446 [2024-11-19 06:52:19.297315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:27.446 [2024-11-19 06:52:19.297322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.297341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:27.446 [2024-11-19 06:52:19.299561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.299600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:27.446 [2024-11-19 06:52:19.299613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.209 ms 00:28:27.446 [2024-11-19 06:52:19.299619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.299792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.299801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:27.446 [2024-11-19 06:52:19.299807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.155 ms 00:28:27.446 [2024-11-19 06:52:19.299813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.301313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.301337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:27.446 [2024-11-19 06:52:19.301345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:28:27.446 [2024-11-19 06:52:19.301356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.302266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.302284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:27.446 [2024-11-19 06:52:19.302292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:28:27.446 [2024-11-19 06:52:19.302298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.309693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.309719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:27.446 [2024-11-19 06:52:19.309727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.371 ms 00:28:27.446 [2024-11-19 06:52:19.309738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.313946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.313970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:27.446 [2024-11-19 06:52:19.313979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.181 ms 00:28:27.446 [2024-11-19 06:52:19.313986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.314056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.314065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:27.446 [2024-11-19 06:52:19.314072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:27.446 [2024-11-19 06:52:19.314082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.322186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.322211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:27.446 [2024-11-19 06:52:19.322218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.091 ms 00:28:27.446 [2024-11-19 06:52:19.322224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.329817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.329841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:27.446 [2024-11-19 06:52:19.329848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.568 ms 00:28:27.446 [2024-11-19 06:52:19.329854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.337619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.337643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:27.446 [2024-11-19 06:52:19.337650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.740 ms 00:28:27.446 [2024-11-19 06:52:19.337656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.345522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.446 [2024-11-19 06:52:19.345546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:27.446 [2024-11-19 06:52:19.345553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.819 ms 00:28:27.446 [2024-11-19 06:52:19.345559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.446 [2024-11-19 06:52:19.345584] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:27.446 [2024-11-19 06:52:19.345596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:27.446 [2024-11-19 06:52:19.345604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:27.446 [2024-11-19 06:52:19.345610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:27.446 [2024-11-19 06:52:19.345616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345634] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:27.446 [2024-11-19 06:52:19.345651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:27.447 [2024-11-19 06:52:19.345704] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:27.447 [2024-11-19 06:52:19.345709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f6eed4e1-a0be-4bdb-b11b-a2b0a2e02fa9 00:28:27.447 [2024-11-19 06:52:19.345715] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:27.447 [2024-11-19 06:52:19.345721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:27.447 [2024-11-19 06:52:19.345727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:27.447 [2024-11-19 06:52:19.345734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:27.447 [2024-11-19 06:52:19.345739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:27.447 [2024-11-19 06:52:19.345745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:27.447 [2024-11-19 06:52:19.345754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:27.447 [2024-11-19 06:52:19.345759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:27.447 [2024-11-19 06:52:19.345765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:27.447 [2024-11-19 06:52:19.345772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.447 [2024-11-19 06:52:19.345778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:27.447 [2024-11-19 06:52:19.345785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:28:27.447 [2024-11-19 06:52:19.345791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.447 [2024-11-19 06:52:19.355832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.447 [2024-11-19 06:52:19.355856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:27.447 [2024-11-19 06:52:19.355865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.019 ms 00:28:27.447 [2024-11-19 06:52:19.355872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.447 [2024-11-19 06:52:19.356192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.447 [2024-11-19 06:52:19.356200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:27.447 [2024-11-19 06:52:19.356208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:28:27.447 [2024-11-19 06:52:19.356215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.391204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.391384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:27.707 [2024-11-19 06:52:19.391398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.391404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.391434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.391440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:27.707 [2024-11-19 06:52:19.391447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.391453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.391527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.391535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:27.707 [2024-11-19 06:52:19.391543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.391550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.391575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.391582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:27.707 [2024-11-19 06:52:19.391589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.391596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.455278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.455312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:27.707 [2024-11-19 06:52:19.455321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.455327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.506959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.707 [2024-11-19 06:52:19.506992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:27.707 [2024-11-19 06:52:19.507002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.707 [2024-11-19 06:52:19.507009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.707 [2024-11-19 06:52:19.507070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507078] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:27.708 [2024-11-19 06:52:19.507085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:27.708 [2024-11-19 06:52:19.507161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:27.708 [2024-11-19 06:52:19.507265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:27.708 [2024-11-19 06:52:19.507318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:27.708 [2024-11-19 06:52:19.507373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.708 [2024-11-19 06:52:19.507427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:27.708 [2024-11-19 06:52:19.507434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.708 [2024-11-19 06:52:19.507440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.708 [2024-11-19 06:52:19.507548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 210.250 ms, result 0 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.646 Remove shared memory files 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80625 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:28.646 ************************************ 00:28:28.646 END TEST ftl_upgrade_shutdown 00:28:28.646 ************************************ 00:28:28.646 00:28:28.646 real 1m23.243s 00:28:28.646 user 1m53.557s 00:28:28.646 sys 0m19.552s 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.646 06:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.646 06:52:20 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:28.646 06:52:20 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:28.646 06:52:20 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:28.646 06:52:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.646 06:52:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:28.646 ************************************ 00:28:28.646 START TEST ftl_restore_fast 00:28:28.646 ************************************ 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:28.646 * Looking for test storage... 00:28:28.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.646 --rc genhtml_branch_coverage=1 00:28:28.646 --rc genhtml_function_coverage=1 00:28:28.646 --rc genhtml_legend=1 00:28:28.646 --rc geninfo_all_blocks=1 00:28:28.646 --rc geninfo_unexecuted_blocks=1 00:28:28.646 00:28:28.646 ' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.646 --rc genhtml_branch_coverage=1 00:28:28.646 --rc genhtml_function_coverage=1 00:28:28.646 --rc genhtml_legend=1 00:28:28.646 --rc geninfo_all_blocks=1 00:28:28.646 --rc geninfo_unexecuted_blocks=1 00:28:28.646 00:28:28.646 ' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.646 --rc genhtml_branch_coverage=1 00:28:28.646 --rc genhtml_function_coverage=1 00:28:28.646 --rc genhtml_legend=1 00:28:28.646 --rc geninfo_all_blocks=1 00:28:28.646 --rc geninfo_unexecuted_blocks=1 00:28:28.646 00:28:28.646 ' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.646 --rc genhtml_branch_coverage=1 00:28:28.646 --rc genhtml_function_coverage=1 00:28:28.646 --rc genhtml_legend=1 00:28:28.646 --rc geninfo_all_blocks=1 00:28:28.646 --rc geninfo_unexecuted_blocks=1 00:28:28.646 00:28:28.646 ' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NCpCQCihuU 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.646 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:28.647 06:52:20 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81130 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81130 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81130 ']' 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:28.647 06:52:20 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.905 [2024-11-19 06:52:20.621654] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:28.905 [2024-11-19 06:52:20.621763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81130 ] 00:28:28.905 [2024-11-19 06:52:20.775699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.164 [2024-11-19 06:52:20.852069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:29.730 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:29.989 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:29.989 { 00:28:29.989 "name": "nvme0n1", 00:28:29.989 "aliases": [ 00:28:29.989 "ad9a256a-3595-4d4b-9f89-75ed1767de59" 00:28:29.989 ], 00:28:29.989 "product_name": "NVMe disk", 00:28:29.989 "block_size": 4096, 00:28:29.989 "num_blocks": 1310720, 00:28:29.989 "uuid": "ad9a256a-3595-4d4b-9f89-75ed1767de59", 00:28:29.989 "numa_id": -1, 00:28:29.989 "assigned_rate_limits": { 00:28:29.989 "rw_ios_per_sec": 0, 00:28:29.989 "rw_mbytes_per_sec": 0, 00:28:29.989 "r_mbytes_per_sec": 0, 00:28:29.989 "w_mbytes_per_sec": 0 00:28:29.989 }, 00:28:29.989 "claimed": true, 00:28:29.989 "claim_type": "read_many_write_one", 00:28:29.989 "zoned": false, 00:28:29.989 "supported_io_types": { 00:28:29.989 "read": true, 00:28:29.989 "write": true, 00:28:29.989 "unmap": true, 00:28:29.990 "flush": true, 00:28:29.990 "reset": true, 00:28:29.990 "nvme_admin": true, 00:28:29.990 "nvme_io": true, 00:28:29.990 "nvme_io_md": false, 00:28:29.990 "write_zeroes": true, 00:28:29.990 "zcopy": false, 00:28:29.990 "get_zone_info": false, 00:28:29.990 "zone_management": false, 00:28:29.990 "zone_append": false, 00:28:29.990 "compare": true, 00:28:29.990 "compare_and_write": false, 00:28:29.990 "abort": true, 00:28:29.990 "seek_hole": false, 00:28:29.990 "seek_data": false, 00:28:29.990 "copy": true, 00:28:29.990 "nvme_iov_md": false 00:28:29.990 }, 00:28:29.990 "driver_specific": { 00:28:29.990 "nvme": [ 00:28:29.990 { 00:28:29.990 "pci_address": "0000:00:11.0", 00:28:29.990 "trid": { 00:28:29.990 "trtype": "PCIe", 00:28:29.990 "traddr": "0000:00:11.0" 00:28:29.990 }, 00:28:29.990 "ctrlr_data": { 00:28:29.990 "cntlid": 0, 00:28:29.990 "vendor_id": "0x1b36", 00:28:29.990 "model_number": "QEMU NVMe Ctrl", 00:28:29.990 "serial_number": "12341", 00:28:29.990 "firmware_revision": "8.0.0", 00:28:29.990 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:29.990 "oacs": { 00:28:29.990 "security": 0, 00:28:29.990 "format": 1, 00:28:29.990 "firmware": 0, 00:28:29.990 "ns_manage": 1 00:28:29.990 }, 00:28:29.990 "multi_ctrlr": false, 00:28:29.990 "ana_reporting": false 00:28:29.990 }, 00:28:29.990 "vs": { 00:28:29.990 "nvme_version": "1.4" 00:28:29.990 }, 00:28:29.990 "ns_data": { 00:28:29.990 "id": 1, 00:28:29.990 "can_share": false 00:28:29.990 } 00:28:29.990 } 00:28:29.990 ], 00:28:29.990 "mp_policy": "active_passive" 00:28:29.990 } 00:28:29.990 } 00:28:29.990 ]' 00:28:29.990 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:29.990 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:29.990 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:30.248 06:52:21 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:30.248 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=89f83c47-59dd-4b73-8813-7b6ec9d7a61a 00:28:30.248 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:30.248 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89f83c47-59dd-4b73-8813-7b6ec9d7a61a 00:28:30.507 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:30.765 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=053e59cb-76fe-4fdc-a7f2-6abf229e5e5b 00:28:30.765 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 053e59cb-76fe-4fdc-a7f2-6abf229e5e5b 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:31.024 { 00:28:31.024 "name": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:31.024 "aliases": [ 00:28:31.024 "lvs/nvme0n1p0" 00:28:31.024 ], 00:28:31.024 "product_name": "Logical Volume", 00:28:31.024 "block_size": 4096, 00:28:31.024 "num_blocks": 26476544, 00:28:31.024 "uuid": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:31.024 "assigned_rate_limits": { 00:28:31.024 "rw_ios_per_sec": 0, 00:28:31.024 "rw_mbytes_per_sec": 0, 00:28:31.024 "r_mbytes_per_sec": 0, 00:28:31.024 "w_mbytes_per_sec": 0 00:28:31.024 }, 00:28:31.024 "claimed": false, 00:28:31.024 "zoned": false, 00:28:31.024 "supported_io_types": { 00:28:31.024 "read": true, 00:28:31.024 "write": true, 00:28:31.024 "unmap": true, 00:28:31.024 "flush": false, 00:28:31.024 "reset": true, 00:28:31.024 "nvme_admin": false, 00:28:31.024 "nvme_io": false, 00:28:31.024 "nvme_io_md": false, 00:28:31.024 "write_zeroes": true, 00:28:31.024 "zcopy": false, 00:28:31.024 "get_zone_info": false, 00:28:31.024 "zone_management": false, 00:28:31.024 
"zone_append": false, 00:28:31.024 "compare": false, 00:28:31.024 "compare_and_write": false, 00:28:31.024 "abort": false, 00:28:31.024 "seek_hole": true, 00:28:31.024 "seek_data": true, 00:28:31.024 "copy": false, 00:28:31.024 "nvme_iov_md": false 00:28:31.024 }, 00:28:31.024 "driver_specific": { 00:28:31.024 "lvol": { 00:28:31.024 "lvol_store_uuid": "053e59cb-76fe-4fdc-a7f2-6abf229e5e5b", 00:28:31.024 "base_bdev": "nvme0n1", 00:28:31.024 "thin_provision": true, 00:28:31.024 "num_allocated_clusters": 0, 00:28:31.024 "snapshot": false, 00:28:31.024 "clone": false, 00:28:31.024 "esnap_clone": false 00:28:31.024 } 00:28:31.024 } 00:28:31.024 } 00:28:31.024 ]' 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:31.024 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:31.285 06:52:22 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:31.545 06:52:23 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:31.545 06:52:23 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:31.545 06:52:23 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.545 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.545 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:31.546 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:31.546 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:31.546 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:31.546 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:31.546 { 00:28:31.546 "name": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:31.546 "aliases": [ 00:28:31.546 "lvs/nvme0n1p0" 00:28:31.546 ], 00:28:31.546 "product_name": "Logical Volume", 00:28:31.546 "block_size": 4096, 00:28:31.546 "num_blocks": 26476544, 00:28:31.546 "uuid": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:31.546 "assigned_rate_limits": { 00:28:31.546 "rw_ios_per_sec": 0, 00:28:31.546 "rw_mbytes_per_sec": 0, 00:28:31.546 "r_mbytes_per_sec": 0, 00:28:31.546 "w_mbytes_per_sec": 0 00:28:31.546 }, 00:28:31.546 "claimed": false, 00:28:31.546 "zoned": false, 00:28:31.546 "supported_io_types": { 00:28:31.546 "read": true, 00:28:31.546 "write": true, 00:28:31.546 "unmap": true, 00:28:31.546 "flush": false, 00:28:31.546 "reset": true, 00:28:31.546 "nvme_admin": false, 00:28:31.546 "nvme_io": false, 00:28:31.546 "nvme_io_md": false, 00:28:31.546 "write_zeroes": true, 00:28:31.546 "zcopy": false, 00:28:31.546 "get_zone_info": false, 00:28:31.546 
"zone_management": false, 00:28:31.546 "zone_append": false, 00:28:31.546 "compare": false, 00:28:31.546 "compare_and_write": false, 00:28:31.546 "abort": false, 00:28:31.546 "seek_hole": true, 00:28:31.546 "seek_data": true, 00:28:31.546 "copy": false, 00:28:31.546 "nvme_iov_md": false 00:28:31.546 }, 00:28:31.546 "driver_specific": { 00:28:31.546 "lvol": { 00:28:31.546 "lvol_store_uuid": "053e59cb-76fe-4fdc-a7f2-6abf229e5e5b", 00:28:31.546 "base_bdev": "nvme0n1", 00:28:31.546 "thin_provision": true, 00:28:31.546 "num_allocated_clusters": 0, 00:28:31.546 "snapshot": false, 00:28:31.546 "clone": false, 00:28:31.546 "esnap_clone": false 00:28:31.546 } 00:28:31.546 } 00:28:31.546 } 00:28:31.546 ]' 00:28:31.546 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:31.806 06:52:23 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:32.067 { 00:28:32.067 "name": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:32.067 "aliases": [ 00:28:32.067 "lvs/nvme0n1p0" 00:28:32.067 ], 00:28:32.067 "product_name": "Logical Volume", 00:28:32.067 "block_size": 4096, 00:28:32.067 "num_blocks": 26476544, 00:28:32.067 "uuid": "e8e0549d-bd44-46ce-8edc-96d3a6eecfb2", 00:28:32.067 "assigned_rate_limits": { 00:28:32.067 "rw_ios_per_sec": 0, 00:28:32.067 "rw_mbytes_per_sec": 0, 00:28:32.067 "r_mbytes_per_sec": 0, 00:28:32.067 "w_mbytes_per_sec": 0 00:28:32.067 }, 00:28:32.067 "claimed": false, 00:28:32.067 "zoned": false, 00:28:32.067 "supported_io_types": { 00:28:32.067 "read": true, 00:28:32.067 "write": true, 00:28:32.067 "unmap": true, 00:28:32.067 "flush": false, 00:28:32.067 "reset": true, 00:28:32.067 "nvme_admin": false, 00:28:32.067 "nvme_io": false, 00:28:32.067 "nvme_io_md": false, 00:28:32.067 "write_zeroes": true, 00:28:32.067 "zcopy": false, 00:28:32.067 "get_zone_info": false, 00:28:32.067 "zone_management": false, 00:28:32.067 "zone_append": false, 00:28:32.067 "compare": false, 00:28:32.067 "compare_and_write": false, 00:28:32.067 "abort": false, 
00:28:32.067 "seek_hole": true, 00:28:32.067 "seek_data": true, 00:28:32.067 "copy": false, 00:28:32.067 "nvme_iov_md": false 00:28:32.067 }, 00:28:32.067 "driver_specific": { 00:28:32.067 "lvol": { 00:28:32.067 "lvol_store_uuid": "053e59cb-76fe-4fdc-a7f2-6abf229e5e5b", 00:28:32.067 "base_bdev": "nvme0n1", 00:28:32.067 "thin_provision": true, 00:28:32.067 "num_allocated_clusters": 0, 00:28:32.067 "snapshot": false, 00:28:32.067 "clone": false, 00:28:32.067 "esnap_clone": false 00:28:32.067 } 00:28:32.067 } 00:28:32.067 } 00:28:32.067 ]' 00:28:32.067 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:32.328 06:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 --l2p_dram_limit 10' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:32.328 06:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e8e0549d-bd44-46ce-8edc-96d3a6eecfb2 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:32.328 [2024-11-19 06:52:24.236862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.328 [2024-11-19 06:52:24.236947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:32.328 [2024-11-19 06:52:24.236969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.328 [2024-11-19 06:52:24.236979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.328 [2024-11-19 06:52:24.237058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.328 [2024-11-19 06:52:24.237071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.328 [2024-11-19 06:52:24.237084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:32.328 [2024-11-19 06:52:24.237094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.328 [2024-11-19 06:52:24.237124] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:32.328 [2024-11-19 06:52:24.237846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:32.328 [2024-11-19 06:52:24.237883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.328 [2024-11-19 06:52:24.237892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.329 [2024-11-19 06:52:24.237905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:28:32.329 [2024-11-19 06:52:24.237914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.238022] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e8dd10ba-9214-4021-bd1f-0d68f51e30a0 00:28:32.329 [2024-11-19 06:52:24.240390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.240446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:32.329 [2024-11-19 06:52:24.240459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:32.329 [2024-11-19 06:52:24.240475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.253464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.253520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.329 [2024-11-19 06:52:24.253537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.938 ms 00:28:32.329 [2024-11-19 06:52:24.253549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.253662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.253675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.329 [2024-11-19 06:52:24.253686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:32.329 [2024-11-19 06:52:24.253701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.253767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.253781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:32.329 [2024-11-19 06:52:24.253792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:32.329 [2024-11-19 06:52:24.253806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.253831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.329 [2024-11-19 06:52:24.258984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.259035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.329 [2024-11-19 06:52:24.259050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 00:28:32.329 [2024-11-19 06:52:24.259059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.259105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.329 [2024-11-19 06:52:24.259114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:32.329 [2024-11-19 06:52:24.259125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:32.329 [2024-11-19 06:52:24.259133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.329 [2024-11-19 06:52:24.259188] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:32.329 [2024-11-19 06:52:24.259346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:32.329 [2024-11-19 06:52:24.259368] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:32.329 [2024-11-19 06:52:24.259381] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:32.329 [2024-11-19 06:52:24.259395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:32.329 [2024-11-19 06:52:24.259405] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:32.329 [2024-11-19 06:52:24.259415] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:32.329 [2024-11-19 06:52:24.259426] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:32.329 [2024-11-19 06:52:24.259440] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:32.589 [2024-11-19 06:52:24.259448] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:32.589 [2024-11-19 06:52:24.259459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.589 [2024-11-19 06:52:24.259467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:32.589 [2024-11-19 06:52:24.259479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:28:32.589 [2024-11-19 06:52:24.259502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.589 [2024-11-19 06:52:24.259623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.589 [2024-11-19 06:52:24.259674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:32.589 [2024-11-19 06:52:24.259686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:28:32.589 [2024-11-19 06:52:24.259696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.589 [2024-11-19 06:52:24.259809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:32.590 [2024-11-19 06:52:24.259830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:32.590 [2024-11-19 06:52:24.259842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.590 [2024-11-19 06:52:24.259851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.259862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:32.590 [2024-11-19 06:52:24.259870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.259880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:32.590 [2024-11-19 06:52:24.259889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:32.590 [2024-11-19 06:52:24.259899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:32.590 [2024-11-19 06:52:24.259906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.590 [2024-11-19 06:52:24.259916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:32.590 [2024-11-19 06:52:24.259941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:32.590 [2024-11-19 06:52:24.259952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.590 [2024-11-19 06:52:24.259960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:32.590 [2024-11-19 06:52:24.259970] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:32.590 [2024-11-19 06:52:24.259977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.259992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:32.590 [2024-11-19 06:52:24.260001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:32.590 [2024-11-19 06:52:24.260032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:32.590 [2024-11-19 06:52:24.260058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:32.590 [2024-11-19 06:52:24.260089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:32.590 [2024-11-19 06:52:24.260113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:32.590 [2024-11-19 06:52:24.260144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.590 [2024-11-19 06:52:24.260162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:32.590 [2024-11-19 06:52:24.260170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:32.590 [2024-11-19 06:52:24.260180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.590 [2024-11-19 06:52:24.260188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:32.590 [2024-11-19 06:52:24.260197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:32.590 [2024-11-19 06:52:24.260205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:32.590 [2024-11-19 06:52:24.260223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:32.590 [2024-11-19 06:52:24.260232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:32.590 [2024-11-19 06:52:24.260250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:32.590 [2024-11-19 06:52:24.260258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:32.590 [2024-11-19 06:52:24.260268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.590 [2024-11-19 06:52:24.260277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:32.590 [2024-11-19 06:52:24.260288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:32.590 [2024-11-19 06:52:24.260295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:32.590 [2024-11-19 06:52:24.260305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:32.590 [2024-11-19 06:52:24.260316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:32.590 [2024-11-19 06:52:24.260327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:32.590 [2024-11-19 06:52:24.260340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:32.590 [2024-11-19 06:52:24.260352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:32.590 [2024-11-19 06:52:24.260373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:32.590 [2024-11-19 06:52:24.260381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:32.590 [2024-11-19 06:52:24.260391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:32.590 [2024-11-19 06:52:24.260398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:32.590 [2024-11-19 06:52:24.260408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:32.590 [2024-11-19 06:52:24.260415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:32.590 [2024-11-19 06:52:24.260424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:32.590 [2024-11-19 06:52:24.260432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:32.590 [2024-11-19 06:52:24.260446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:32.590 [2024-11-19 06:52:24.260488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:32.590 [2024-11-19 06:52:24.260500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.590 [2024-11-19 06:52:24.260519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:32.590 [2024-11-19 06:52:24.260527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:32.590 [2024-11-19 06:52:24.260536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:32.590 [2024-11-19 06:52:24.260546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.590 [2024-11-19 06:52:24.260557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:32.590 [2024-11-19 06:52:24.260565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:28:32.590 [2024-11-19 06:52:24.260576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.590 [2024-11-19 06:52:24.260617] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:32.590 [2024-11-19 06:52:24.260633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:35.889 [2024-11-19 06:52:27.710846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.710941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:35.889 [2024-11-19 06:52:27.710960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3450.211 ms 00:28:35.889 [2024-11-19 06:52:27.710973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.748790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.748861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.889 [2024-11-19 06:52:27.748876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.552 ms 00:28:35.889 [2024-11-19 06:52:27.748889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.749060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.749077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.889 [2024-11-19 06:52:27.749087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:35.889 [2024-11-19 06:52:27.749103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.789954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.790010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.889 [2024-11-19 06:52:27.790024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.809 ms 00:28:35.889 [2024-11-19 06:52:27.790035] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.790073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.790089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.889 [2024-11-19 06:52:27.790098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:35.889 [2024-11-19 06:52:27.790110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.790857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.790907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.889 [2024-11-19 06:52:27.790919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:28:35.889 [2024-11-19 06:52:27.790950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.791072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.791086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.889 [2024-11-19 06:52:27.791100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:35.889 [2024-11-19 06:52:27.791115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.889 [2024-11-19 06:52:27.811803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.889 [2024-11-19 06:52:27.811855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.889 [2024-11-19 06:52:27.811867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.667 ms 00:28:35.889 [2024-11-19 06:52:27.811879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.826846] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:36.150 [2024-11-19 06:52:27.831916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.831976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:36.150 [2024-11-19 06:52:27.831992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.911 ms 00:28:36.150 [2024-11-19 06:52:27.832001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.935963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.936023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:36.150 [2024-11-19 06:52:27.936043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.923 ms 00:28:36.150 [2024-11-19 06:52:27.936052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.936279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.936298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:36.150 [2024-11-19 06:52:27.936315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:28:36.150 [2024-11-19 06:52:27.936325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.962446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.962496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:36.150 [2024-11-19 06:52:27.962514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.063 ms 00:28:36.150 [2024-11-19 06:52:27.962523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.987734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.987781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:36.150 [2024-11-19 06:52:27.987797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.151 ms 00:28:36.150 [2024-11-19 06:52:27.987806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:27.988454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:27.988483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:36.150 [2024-11-19 06:52:27.988497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:28:36.150 [2024-11-19 06:52:27.988505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.150 [2024-11-19 06:52:28.074688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.150 [2024-11-19 06:52:28.074742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:36.150 [2024-11-19 06:52:28.074763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.130 ms 00:28:36.150 [2024-11-19 06:52:28.074773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.411 [2024-11-19 06:52:28.104048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.411 [2024-11-19 06:52:28.104101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:36.411 [2024-11-19 06:52:28.104118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.171 ms 00:28:36.411 [2024-11-19 06:52:28.104127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.411 [2024-11-19 06:52:28.130326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.411 [2024-11-19 06:52:28.130376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:36.411 [2024-11-19 06:52:28.130391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.141 ms 00:28:36.411 [2024-11-19 06:52:28.130400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.411 [2024-11-19 06:52:28.156714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.411 [2024-11-19 06:52:28.156768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:36.411 [2024-11-19 06:52:28.156785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.255 ms 00:28:36.411 [2024-11-19 06:52:28.156793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.411 [2024-11-19 06:52:28.156854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.411 [2024-11-19 06:52:28.156865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:36.411 [2024-11-19 06:52:28.156881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:36.411 [2024-11-19 06:52:28.156890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.411 [2024-11-19 06:52:28.157006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.412 [2024-11-19 
06:52:28.157019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:36.412 [2024-11-19 06:52:28.157035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:36.412 [2024-11-19 06:52:28.157043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.412 [2024-11-19 06:52:28.158451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3920.995 ms, result 0 00:28:36.412 { 00:28:36.412 "name": "ftl0", 00:28:36.412 "uuid": "e8dd10ba-9214-4021-bd1f-0d68f51e30a0" 00:28:36.412 } 00:28:36.412 06:52:28 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:36.412 06:52:28 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:36.673 06:52:28 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:36.673 06:52:28 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:36.673 [2024-11-19 06:52:28.541485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.541543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:36.673 [2024-11-19 06:52:28.541555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.673 [2024-11-19 06:52:28.541574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.541601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:36.673 [2024-11-19 06:52:28.545053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.545097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:36.673 [2024-11-19 06:52:28.545112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:28:36.673 [2024-11-19 06:52:28.545121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.545417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.545432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:36.673 [2024-11-19 06:52:28.545449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:36.673 [2024-11-19 06:52:28.545458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.548736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.548765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:36.673 [2024-11-19 06:52:28.548778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:28:36.673 [2024-11-19 06:52:28.548789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.554991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.555034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:36.673 [2024-11-19 06:52:28.555053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:28:36.673 [2024-11-19 06:52:28.555061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.580658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.580706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:36.673 [2024-11-19 06:52:28.580721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.518 ms 00:28:36.673 [2024-11-19 06:52:28.580729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.599150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.599196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:36.673 [2024-11-19 06:52:28.599209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.362 ms 00:28:36.673 [2024-11-19 06:52:28.599216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.673 [2024-11-19 06:52:28.599379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.673 [2024-11-19 06:52:28.599391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:36.673 [2024-11-19 06:52:28.599404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:28:36.673 [2024-11-19 06:52:28.599412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.935 [2024-11-19 06:52:28.620359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.935 [2024-11-19 06:52:28.620400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:36.935 [2024-11-19 06:52:28.620412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.926 ms 00:28:36.935 [2024-11-19 06:52:28.620418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.935 [2024-11-19 06:52:28.639955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.935 [2024-11-19 06:52:28.639990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:36.935 [2024-11-19 06:52:28.640001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.492 ms 00:28:36.935 [2024-11-19 06:52:28.640008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.935 [2024-11-19 06:52:28.658547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.935 [2024-11-19 06:52:28.658581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:36.935 [2024-11-19 06:52:28.658592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.497 ms 00:28:36.935 [2024-11-19 06:52:28.658598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.935 [2024-11-19 06:52:28.677023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.935 [2024-11-19 06:52:28.677054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:36.935 [2024-11-19 06:52:28.677064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.355 ms 00:28:36.935 [2024-11-19 06:52:28.677070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.936 [2024-11-19 06:52:28.677103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:36.936 [2024-11-19 06:52:28.677115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677132] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677310] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 
[2024-11-19 06:52:28.677476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:36.936 [2024-11-19 06:52:28.677659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:36.936 [2024-11-19 06:52:28.677731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:36.937 [2024-11-19 06:52:28.677831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:36.937 [2024-11-19 06:52:28.677842] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8dd10ba-9214-4021-bd1f-0d68f51e30a0 
00:28:36.937 [2024-11-19 06:52:28.677849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:36.937 [2024-11-19 06:52:28.677859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:36.937 [2024-11-19 06:52:28.677865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:36.937 [2024-11-19 06:52:28.677875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:36.937 [2024-11-19 06:52:28.677882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:36.937 [2024-11-19 06:52:28.677890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:36.937 [2024-11-19 06:52:28.677897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:36.937 [2024-11-19 06:52:28.677904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:36.937 [2024-11-19 06:52:28.677909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:36.937 [2024-11-19 06:52:28.677916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.937 [2024-11-19 06:52:28.677933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:36.937 [2024-11-19 06:52:28.677943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:28:36.937 [2024-11-19 06:52:28.677949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.687971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.937 [2024-11-19 06:52:28.687998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:36.937 [2024-11-19 06:52:28.688010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.991 ms 00:28:36.937 [2024-11-19 06:52:28.688016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.688289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.937 [2024-11-19 06:52:28.688297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:36.937 [2024-11-19 06:52:28.688306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:28:36.937 [2024-11-19 06:52:28.688314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.724090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.724118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.937 [2024-11-19 06:52:28.724129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.724135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.724187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.724194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.937 [2024-11-19 06:52:28.724201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.724209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.724279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.724288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.937 [2024-11-19 06:52:28.724296] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.724302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.724320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.724328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.937 [2024-11-19 06:52:28.724335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.724341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.788149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.788182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.937 [2024-11-19 06:52:28.788193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.788199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.937 [2024-11-19 06:52:28.840255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.937 [2024-11-19 06:52:28.840353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.937 [2024-11-19 06:52:28.840433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.937 [2024-11-19 06:52:28.840538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:36.937 [2024-11-19 06:52:28.840589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:36.937 [2024-11-19 06:52:28.840649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.937 [2024-11-19 06:52:28.840704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.937 [2024-11-19 06:52:28.840712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.937 [2024-11-19 06:52:28.840719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.937 [2024-11-19 06:52:28.840839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 299.325 ms, result 0 00:28:36.937 true 00:28:36.937 06:52:28 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81130 00:28:36.937 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81130 ']' 00:28:36.937 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81130 00:28:36.937 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81130 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.198 killing process with pid 81130 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81130' 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81130 00:28:37.198 06:52:28 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81130 00:28:42.494 06:52:34 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:46.694 262144+0 records in 00:28:46.694 262144+0 records out 00:28:46.694 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.86038 s, 278 MB/s 00:28:46.694 06:52:38 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:48.608 06:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:48.608 [2024-11-19 06:52:40.310321] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:28:48.608 [2024-11-19 06:52:40.310528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:28:48.608 [2024-11-19 06:52:40.465383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.869 [2024-11-19 06:52:40.566700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.131 [2024-11-19 06:52:40.855088] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.131 [2024-11-19 06:52:40.855164] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.131 [2024-11-19 06:52:41.015975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.131 [2024-11-19 06:52:41.016036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:49.131 [2024-11-19 06:52:41.016057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:49.131 [2024-11-19 06:52:41.016067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.131 [2024-11-19 06:52:41.016120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.131 [2024-11-19 06:52:41.016132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:49.131 [2024-11-19 06:52:41.016143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:49.131 [2024-11-19 06:52:41.016151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.131 [2024-11-19 06:52:41.016171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:49.131 [2024-11-19 06:52:41.016942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:49.131 [2024-11-19 06:52:41.016980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.131 [2024-11-19 06:52:41.016988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:49.131 [2024-11-19 06:52:41.016998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:28:49.131 [2024-11-19 06:52:41.017006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.131 [2024-11-19 06:52:41.018688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:49.132 [2024-11-19 06:52:41.032759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.032811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:49.132 [2024-11-19 06:52:41.032824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.074 ms 00:28:49.132 [2024-11-19 06:52:41.032832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.032910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.032920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:49.132 [2024-11-19 06:52:41.032946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:49.132 [2024-11-19 06:52:41.032954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.041034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:49.132 [2024-11-19 06:52:41.041073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:49.132 [2024-11-19 06:52:41.041084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:28:49.132 [2024-11-19 06:52:41.041093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.041176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.041186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:49.132 [2024-11-19 06:52:41.041195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:49.132 [2024-11-19 06:52:41.041205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.041249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.041261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:49.132 [2024-11-19 06:52:41.041270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:49.132 [2024-11-19 06:52:41.041278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.041302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:49.132 [2024-11-19 06:52:41.045253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.045307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:49.132 [2024-11-19 06:52:41.045318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:28:49.132 [2024-11-19 06:52:41.045330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.045365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.045373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:49.132 [2024-11-19 06:52:41.045383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:49.132 [2024-11-19 06:52:41.045391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.045442] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:49.132 [2024-11-19 06:52:41.045466] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:49.132 [2024-11-19 06:52:41.045504] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:49.132 [2024-11-19 06:52:41.045525] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:49.132 [2024-11-19 06:52:41.045633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:49.132 [2024-11-19 06:52:41.045651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:49.132 [2024-11-19 06:52:41.045663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:49.132 [2024-11-19 06:52:41.045674] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:49.132 [2024-11-19 06:52:41.045683] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:49.132 [2024-11-19 06:52:41.045694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:49.132 [2024-11-19 06:52:41.045704] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:49.132 [2024-11-19 06:52:41.045714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:49.132 [2024-11-19 06:52:41.045722] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:49.132 [2024-11-19 06:52:41.045734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.045742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:49.132 [2024-11-19 06:52:41.045750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:49.132 [2024-11-19 06:52:41.045758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.045842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.132 [2024-11-19 06:52:41.045853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:49.132 [2024-11-19 06:52:41.045862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:49.132 [2024-11-19 06:52:41.045870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.132 [2024-11-19 06:52:41.045989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:49.132 [2024-11-19 06:52:41.046016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:49.132 [2024-11-19 06:52:41.046026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:49.132 [2024-11-19 06:52:41.046051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:49.132 [2024-11-19 06:52:41.046075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.132 [2024-11-19 06:52:41.046090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:49.132 [2024-11-19 06:52:41.046097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:49.132 [2024-11-19 06:52:41.046104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.132 [2024-11-19 06:52:41.046114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:49.132 [2024-11-19 06:52:41.046123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:49.132 [2024-11-19 06:52:41.046136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:49.132 [2024-11-19 06:52:41.046150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046158] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:49.132 [2024-11-19 06:52:41.046173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:49.132 [2024-11-19 06:52:41.046192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:49.132 [2024-11-19 06:52:41.046213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:49.132 [2024-11-19 06:52:41.046234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:49.132 [2024-11-19 06:52:41.046254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.132 [2024-11-19 06:52:41.046268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:49.132 [2024-11-19 06:52:41.046274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:49.132 [2024-11-19 06:52:41.046281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.132 [2024-11-19 06:52:41.046287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:49.132 [2024-11-19 06:52:41.046294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:49.132 [2024-11-19 06:52:41.046300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:49.132 [2024-11-19 06:52:41.046313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:49.132 [2024-11-19 06:52:41.046319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:49.132 [2024-11-19 06:52:41.046335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:49.132 [2024-11-19 06:52:41.046345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.132 [2024-11-19 06:52:41.046362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:49.132 [2024-11-19 06:52:41.046369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:49.132 [2024-11-19 06:52:41.046377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:49.132 
[2024-11-19 06:52:41.046385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:49.132 [2024-11-19 06:52:41.046392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:49.132 [2024-11-19 06:52:41.046399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:49.132 [2024-11-19 06:52:41.046407] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:49.133 [2024-11-19 06:52:41.046419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:49.133 [2024-11-19 06:52:41.046436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:49.133 [2024-11-19 06:52:41.046443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:49.133 [2024-11-19 06:52:41.046456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:49.133 [2024-11-19 06:52:41.046464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:49.133 [2024-11-19 06:52:41.046473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:49.133 [2024-11-19 06:52:41.046482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:49.133 [2024-11-19 06:52:41.046489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:49.133 [2024-11-19 06:52:41.046497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:49.133 [2024-11-19 06:52:41.046504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:49.133 [2024-11-19 06:52:41.046541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:49.133 [2024-11-19 06:52:41.046552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:49.133 [2024-11-19 06:52:41.046569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:49.133 [2024-11-19 06:52:41.046576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:49.133 [2024-11-19 06:52:41.046583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:49.133 [2024-11-19 06:52:41.046593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.133 [2024-11-19 06:52:41.046600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:49.133 [2024-11-19 06:52:41.046610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:28:49.133 [2024-11-19 06:52:41.046618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.078578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.078627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.395 [2024-11-19 06:52:41.078639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.916 ms 00:28:49.395 [2024-11-19 06:52:41.078648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.078744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.078753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:49.395 [2024-11-19 06:52:41.078762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:49.395 [2024-11-19 06:52:41.078770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.129375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.129430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.395 [2024-11-19 06:52:41.129444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.547 ms 00:28:49.395 [2024-11-19 06:52:41.129454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.129503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.129514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.395 [2024-11-19 06:52:41.129524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:49.395 [2024-11-19 06:52:41.129536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.130131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.130166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.395 [2024-11-19 06:52:41.130178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:28:49.395 [2024-11-19 06:52:41.130187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.130346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.130359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.395 [2024-11-19 06:52:41.130369] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:28:49.395 [2024-11-19 06:52:41.130384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.145893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.145961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.395 [2024-11-19 06:52:41.145975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.489 ms 00:28:49.395 [2024-11-19 06:52:41.145984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.160023] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:49.395 [2024-11-19 06:52:41.160074] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:49.395 [2024-11-19 06:52:41.160089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.160099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:49.395 [2024-11-19 06:52:41.160110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.999 ms 00:28:49.395 [2024-11-19 06:52:41.160117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.185797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.185845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:49.395 [2024-11-19 06:52:41.185864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:28:49.395 [2024-11-19 06:52:41.185873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.198226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.198281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:49.395 [2024-11-19 06:52:41.198293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.303 ms 00:28:49.395 [2024-11-19 06:52:41.198302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.210761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.210804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:49.395 [2024-11-19 06:52:41.210816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:28:49.395 [2024-11-19 06:52:41.210824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.211479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.211514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.395 [2024-11-19 06:52:41.211525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:28:49.395 [2024-11-19 06:52:41.211534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.274581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.274646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:49.395 [2024-11-19 06:52:41.274661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.024 ms 00:28:49.395 [2024-11-19 06:52:41.274677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.395 [2024-11-19 06:52:41.285807] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.395 [2024-11-19 06:52:41.288671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.395 [2024-11-19 06:52:41.288717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.396 [2024-11-19 06:52:41.288731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.938 ms 00:28:49.396 [2024-11-19 06:52:41.288740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.288823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.288835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:49.396 [2024-11-19 06:52:41.288845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:49.396 [2024-11-19 06:52:41.288854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.288948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.288961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.396 [2024-11-19 06:52:41.288971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:49.396 [2024-11-19 06:52:41.288979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.289001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.289010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.396 [2024-11-19 06:52:41.289019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:49.396 [2024-11-19 06:52:41.289028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.289066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:49.396 [2024-11-19 06:52:41.289080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.289091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:49.396 [2024-11-19 06:52:41.289100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:49.396 [2024-11-19 06:52:41.289110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.314400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.314447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.396 [2024-11-19 06:52:41.314460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.269 ms 00:28:49.396 [2024-11-19 06:52:41.314469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.396 [2024-11-19 06:52:41.314561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.396 [2024-11-19 06:52:41.314573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.396 [2024-11-19 06:52:41.314583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:49.396 [2024-11-19 06:52:41.314591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:49.396 [2024-11-19 06:52:41.315821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.349 ms, result 0 00:28:50.782  [2024-11-19T06:52:43.379Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-19T06:52:44.337Z] Copying: 44/1024 [MB] (21 MBps) [2024-11-19T06:52:45.723Z] Copying: 74/1024 [MB] (29 MBps) [2024-11-19T06:52:46.668Z] Copying: 99/1024 [MB] (24 MBps) [2024-11-19T06:52:47.612Z] Copying: 118/1024 [MB] (19 MBps) [2024-11-19T06:52:48.556Z] Copying: 135/1024 [MB] (17 MBps) [2024-11-19T06:52:49.497Z] Copying: 154/1024 [MB] (18 MBps) [2024-11-19T06:52:50.440Z] Copying: 170/1024 [MB] (16 MBps) [2024-11-19T06:52:51.385Z] Copying: 189/1024 [MB] (18 MBps) [2024-11-19T06:52:52.332Z] Copying: 207/1024 [MB] (18 MBps) [2024-11-19T06:52:53.716Z] Copying: 225/1024 [MB] (18 MBps) [2024-11-19T06:52:54.659Z] Copying: 243/1024 [MB] (17 MBps) [2024-11-19T06:52:55.599Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-19T06:52:56.544Z] Copying: 283/1024 [MB] (30 MBps) [2024-11-19T06:52:57.490Z] Copying: 296/1024 [MB] (13 MBps) [2024-11-19T06:52:58.436Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-19T06:52:59.382Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-19T06:53:00.326Z] Copying: 337/1024 [MB] (18 MBps) [2024-11-19T06:53:01.715Z] Copying: 354/1024 [MB] (16 MBps) [2024-11-19T06:53:02.655Z] Copying: 371/1024 [MB] (17 MBps) [2024-11-19T06:53:03.600Z] Copying: 392/1024 [MB] (20 MBps) [2024-11-19T06:53:04.541Z] Copying: 411/1024 [MB] (19 MBps) [2024-11-19T06:53:05.484Z] Copying: 433/1024 [MB] (22 MBps) [2024-11-19T06:53:06.425Z] Copying: 450/1024 [MB] (16 MBps) [2024-11-19T06:53:07.368Z] Copying: 474/1024 [MB] (23 MBps) [2024-11-19T06:53:08.751Z] Copying: 490/1024 [MB] (15 MBps) [2024-11-19T06:53:09.694Z] Copying: 510/1024 [MB] (19 MBps) [2024-11-19T06:53:10.636Z] Copying: 524/1024 [MB] (14 MBps) [2024-11-19T06:53:11.577Z] Copying: 544/1024 [MB] (19 MBps) [2024-11-19T06:53:12.516Z] Copying: 560/1024 [MB] (16 MBps) [2024-11-19T06:53:13.457Z] Copying: 580/1024 [MB] (20 MBps) [2024-11-19T06:53:14.397Z] Copying: 601/1024 [MB] (20 MBps) [2024-11-19T06:53:15.340Z] Copying: 625/1024 [MB] (23 MBps) [2024-11-19T06:53:16.340Z] Copying: 644/1024 [MB] (18 MBps) [2024-11-19T06:53:17.727Z] Copying: 658/1024 [MB] (14 MBps) [2024-11-19T06:53:18.665Z] Copying: 672/1024 [MB] (13 MBps) [2024-11-19T06:53:19.604Z] Copying: 696/1024 [MB] (24 MBps) [2024-11-19T06:53:20.548Z] Copying: 740/1024 [MB] (43 MBps) [2024-11-19T06:53:21.493Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-19T06:53:22.438Z] Copying: 770/1024 [MB] (18 MBps) [2024-11-19T06:53:23.383Z] Copying: 786/1024 [MB] (16 MBps) [2024-11-19T06:53:24.766Z] Copying: 802/1024 [MB] (15 MBps) [2024-11-19T06:53:25.339Z] Copying: 824/1024 [MB] (22 MBps) [2024-11-19T06:53:26.726Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-19T06:53:27.669Z] Copying: 865400/1048576 [kB] (10224 kBps) [2024-11-19T06:53:28.611Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-19T06:53:29.555Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-19T06:53:30.496Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-19T06:53:31.440Z] Copying: 887/1024 [MB] (11 MBps) [2024-11-19T06:53:32.386Z] Copying: 900/1024 [MB] (12 MBps) [2024-11-19T06:53:33.332Z] Copying: 915/1024 [MB] (14 MBps) [2024-11-19T06:53:34.715Z] Copying: 926/1024 [MB] (11 MBps) [2024-11-19T06:53:35.659Z] Copying: 939/1024 [MB] (12 MBps) [2024-11-19T06:53:36.601Z] Copying: 976/1024 [MB] (37 MBps) [2024-11-19T06:53:37.546Z] Copying: 998/1024 [MB] (21 MBps) [2024-11-19T06:53:37.806Z] Copying: 1015/1024 [MB] (17 
MBps) [2024-11-19T06:53:37.806Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-19 06:53:37.658265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.877 [2024-11-19 06:53:37.658319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:45.877 [2024-11-19 06:53:37.658335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:45.877 [2024-11-19 06:53:37.658344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.877 [2024-11-19 06:53:37.658367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:45.877 [2024-11-19 06:53:37.661442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.877 [2024-11-19 06:53:37.661481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:45.877 [2024-11-19 06:53:37.661492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:29:45.877 [2024-11-19 06:53:37.661500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.877 [2024-11-19 06:53:37.663767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.877 [2024-11-19 06:53:37.663807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:45.877 [2024-11-19 06:53:37.663817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:29:45.877 [2024-11-19 06:53:37.663826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.877 [2024-11-19 06:53:37.663853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.877 [2024-11-19 06:53:37.663862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:45.877 [2024-11-19 06:53:37.663872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:45.877 [2024-11-19 06:53:37.663879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.877 [2024-11-19 06:53:37.663951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.877 [2024-11-19 06:53:37.663963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:45.877 [2024-11-19 06:53:37.663972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:45.877 [2024-11-19 06:53:37.663980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.877 [2024-11-19 06:53:37.663994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:45.877 [2024-11-19 06:53:37.664007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 
0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:45.877 [2024-11-19 06:53:37.664085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664435] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 
06:53:37.664620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:45.878 [2024-11-19 06:53:37.664757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:45.879 [2024-11-19 06:53:37.664772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:45.879 [2024-11-19 06:53:37.664780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8dd10ba-9214-4021-bd1f-0d68f51e30a0 00:29:45.879 [2024-11-19 06:53:37.664788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:45.879 [2024-11-19 06:53:37.664796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:45.879 [2024-11-19 06:53:37.664803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:45.879 [2024-11-19 06:53:37.664810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:45.879 [2024-11-19 06:53:37.664820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:45.879 [2024-11-19 
06:53:37.664828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:45.879 [2024-11-19 06:53:37.664835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:45.879 [2024-11-19 06:53:37.664841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:45.879 [2024-11-19 06:53:37.664848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:45.879 [2024-11-19 06:53:37.664855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.879 [2024-11-19 06:53:37.664863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:45.879 [2024-11-19 06:53:37.664871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:29:45.879 [2024-11-19 06:53:37.664879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.678513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.879 [2024-11-19 06:53:37.678550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:45.879 [2024-11-19 06:53:37.678567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.619 ms 00:29:45.879 [2024-11-19 06:53:37.678576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.678971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.879 [2024-11-19 06:53:37.678986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:45.879 [2024-11-19 06:53:37.678996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:29:45.879 [2024-11-19 06:53:37.679003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.715219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.879 [2024-11-19 06:53:37.715263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:45.879 [2024-11-19 06:53:37.715274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.879 [2024-11-19 06:53:37.715281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.715349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.879 [2024-11-19 06:53:37.715358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:45.879 [2024-11-19 06:53:37.715366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.879 [2024-11-19 06:53:37.715374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.715428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.879 [2024-11-19 06:53:37.715438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:45.879 [2024-11-19 06:53:37.715450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.879 [2024-11-19 06:53:37.715457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.715472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.879 [2024-11-19 06:53:37.715480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:45.879 [2024-11-19 06:53:37.715487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.879 [2024-11-19 06:53:37.715499] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:45.879 [2024-11-19 06:53:37.800312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.879 [2024-11-19 06:53:37.800360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:45.879 [2024-11-19 06:53:37.800380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.879 [2024-11-19 06:53:37.800389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.140 [2024-11-19 06:53:37.869183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.140 [2024-11-19 06:53:37.869297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.140 [2024-11-19 06:53:37.869366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.140 [2024-11-19 06:53:37.869474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:46.140 [2024-11-19 06:53:37.869538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.140 [2024-11-19 06:53:37.869606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 [2024-11-19 06:53:37.869614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.140 [2024-11-19 06:53:37.869673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.140 [2024-11-19 06:53:37.869681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.140 
[2024-11-19 06:53:37.869689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.140 [2024-11-19 06:53:37.869825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.519 ms, result 0 00:29:46.712 00:29:46.712 00:29:46.712 06:53:38 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:46.972 [2024-11-19 06:53:38.675398] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:46.972 [2024-11-19 06:53:38.675550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81951 ] 00:29:46.972 [2024-11-19 06:53:38.841442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.233 [2024-11-19 06:53:38.958971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.498 [2024-11-19 06:53:39.244614] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.498 [2024-11-19 06:53:39.244695] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.498 [2024-11-19 06:53:39.406075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.406131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:47.498 [2024-11-19 06:53:39.406152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:47.498 [2024-11-19 06:53:39.406161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.406214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.406225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.498 [2024-11-19 06:53:39.406238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:47.498 [2024-11-19 06:53:39.406245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.406266] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:47.498 [2024-11-19 06:53:39.407461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:47.498 [2024-11-19 06:53:39.407513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.407523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.498 [2024-11-19 06:53:39.407534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:29:47.498 [2024-11-19 06:53:39.407543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.407859] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:47.498 [2024-11-19 06:53:39.407886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.407896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:47.498 [2024-11-19 06:53:39.407908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.029 ms 00:29:47.498 [2024-11-19 06:53:39.407916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.408122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.408161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:47.498 [2024-11-19 06:53:39.408172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:47.498 [2024-11-19 06:53:39.408181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.408471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.408492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.498 [2024-11-19 06:53:39.408502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:29:47.498 [2024-11-19 06:53:39.408510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.408581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.408590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.498 [2024-11-19 06:53:39.408599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:47.498 [2024-11-19 06:53:39.408607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.408630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.408640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:47.498 [2024-11-19 06:53:39.408648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:47.498 [2024-11-19 06:53:39.408659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.408678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.498 [2024-11-19 06:53:39.412964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.413004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.498 [2024-11-19 06:53:39.413014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.292 ms 00:29:47.498 [2024-11-19 06:53:39.413022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.413061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.413069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:47.498 [2024-11-19 06:53:39.413077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:47.498 [2024-11-19 06:53:39.413084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.413141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:47.498 [2024-11-19 06:53:39.413166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:47.498 [2024-11-19 06:53:39.413203] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:47.498 [2024-11-19 06:53:39.413220] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:29:47.498 [2024-11-19 06:53:39.413323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:47.498 [2024-11-19 06:53:39.413342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:47.498 [2024-11-19 06:53:39.413353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:47.498 [2024-11-19 06:53:39.413364] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:47.498 [2024-11-19 06:53:39.413374] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:47.498 [2024-11-19 06:53:39.413382] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:47.498 [2024-11-19 06:53:39.413393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:47.498 [2024-11-19 06:53:39.413400] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:47.498 [2024-11-19 06:53:39.413407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:47.498 [2024-11-19 06:53:39.413415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.413423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:47.498 [2024-11-19 06:53:39.413431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:29:47.498 [2024-11-19 06:53:39.413439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.413523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.498 [2024-11-19 06:53:39.413531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:47.498 [2024-11-19 06:53:39.413539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:47.498 [2024-11-19 06:53:39.413549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.498 [2024-11-19 06:53:39.413650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:47.498 [2024-11-19 06:53:39.413661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:47.498 [2024-11-19 06:53:39.413670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.498 [2024-11-19 06:53:39.413679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.498 [2024-11-19 06:53:39.413687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:47.498 [2024-11-19 06:53:39.413694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:47.498 [2024-11-19 06:53:39.413702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:47.498 [2024-11-19 06:53:39.413709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:47.498 [2024-11-19 06:53:39.413718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:47.498 [2024-11-19 06:53:39.413724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.498 [2024-11-19 06:53:39.413731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:47.498 [2024-11-19 06:53:39.413740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:47.498 [2024-11-19 06:53:39.413747] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:29:47.498 [2024-11-19 06:53:39.413754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:47.498 [2024-11-19 06:53:39.413762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:47.498 [2024-11-19 06:53:39.413769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.498 [2024-11-19 06:53:39.413777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:47.498 [2024-11-19 06:53:39.413789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:47.499 [2024-11-19 06:53:39.413795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:47.499 [2024-11-19 06:53:39.413809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.499 [2024-11-19 06:53:39.413823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:47.499 [2024-11-19 06:53:39.413830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.499 [2024-11-19 06:53:39.413843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:47.499 [2024-11-19 06:53:39.413850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.499 [2024-11-19 06:53:39.413862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:47.499 [2024-11-19 06:53:39.413869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.499 [2024-11-19 06:53:39.413881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:47.499 [2024-11-19 06:53:39.413887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.499 [2024-11-19 06:53:39.413900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:47.499 [2024-11-19 06:53:39.413907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:47.499 [2024-11-19 06:53:39.413913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.499 [2024-11-19 06:53:39.413919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:47.499 [2024-11-19 06:53:39.413941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:47.499 [2024-11-19 06:53:39.413948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:47.499 [2024-11-19 06:53:39.413961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:47.499 [2024-11-19 06:53:39.413968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.499 [2024-11-19 06:53:39.413977] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:47.499 [2024-11-19 
06:53:39.413985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:47.499 [2024-11-19 06:53:39.413993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.499 [2024-11-19 06:53:39.414001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.499 [2024-11-19 06:53:39.414009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:47.499 [2024-11-19 06:53:39.414016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:47.499 [2024-11-19 06:53:39.414023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:47.499 [2024-11-19 06:53:39.414030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:47.499 [2024-11-19 06:53:39.414037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:47.499 [2024-11-19 06:53:39.414044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:47.499 [2024-11-19 06:53:39.414053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:47.499 [2024-11-19 06:53:39.414065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:47.499 [2024-11-19 06:53:39.414083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:47.499 [2024-11-19 06:53:39.414090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:47.499 [2024-11-19 06:53:39.414097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:47.499 [2024-11-19 06:53:39.414104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:47.499 [2024-11-19 06:53:39.414111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:47.499 [2024-11-19 06:53:39.414118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:47.499 [2024-11-19 06:53:39.414125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:47.499 [2024-11-19 06:53:39.414133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:47.499 [2024-11-19 06:53:39.414140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:47.499 [2024-11-19 06:53:39.414175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:47.499 [2024-11-19 06:53:39.414184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.499 [2024-11-19 06:53:39.414200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:47.499 [2024-11-19 06:53:39.414207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:47.499 [2024-11-19 06:53:39.414215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:47.499 [2024-11-19 06:53:39.414224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.499 [2024-11-19 06:53:39.414233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:47.499 [2024-11-19 06:53:39.414240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:29:47.499 [2024-11-19 06:53:39.414248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.761 [2024-11-19 06:53:39.441885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.441945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.762 [2024-11-19 06:53:39.441957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.596 ms 00:29:47.762 [2024-11-19 06:53:39.441966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.442050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.442059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:47.762 [2024-11-19 06:53:39.442069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:47.762 [2024-11-19 06:53:39.442079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.493636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.493703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.762 [2024-11-19 06:53:39.493717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.499 ms 00:29:47.762 [2024-11-19 06:53:39.493726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.493780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.493791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.762 [2024-11-19 06:53:39.493800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:47.762 [2024-11-19 06:53:39.493808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.493939] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.493952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.762 [2024-11-19 06:53:39.493961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:47.762 [2024-11-19 06:53:39.493970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.494098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.494111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.762 [2024-11-19 06:53:39.494120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:47.762 [2024-11-19 06:53:39.494128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.509675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.509724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.762 [2024-11-19 06:53:39.509736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:29:47.762 [2024-11-19 06:53:39.509744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.509896] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:47.762 [2024-11-19 06:53:39.509910] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:47.762 [2024-11-19 06:53:39.509946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.509958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:47.762 [2024-11-19 06:53:39.509968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:47.762 [2024-11-19 06:53:39.509975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.522270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.522308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:47.762 [2024-11-19 06:53:39.522319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.274 ms 00:29:47.762 [2024-11-19 06:53:39.522329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.522458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.522469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:47.762 [2024-11-19 06:53:39.522477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:47.762 [2024-11-19 06:53:39.522490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.522540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.522550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:47.762 [2024-11-19 06:53:39.522558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:47.762 [2024-11-19 06:53:39.522567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.523187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 
06:53:39.523214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:47.762 [2024-11-19 06:53:39.523224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:29:47.762 [2024-11-19 06:53:39.523232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.523249] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:47.762 [2024-11-19 06:53:39.523262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.523270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:47.762 [2024-11-19 06:53:39.523279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:47.762 [2024-11-19 06:53:39.523287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.536002] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:47.762 [2024-11-19 06:53:39.536161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.536172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:47.762 [2024-11-19 06:53:39.536182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.856 ms 00:29:47.762 [2024-11-19 06:53:39.536190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.538377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.538410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:47.762 [2024-11-19 06:53:39.538420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:29:47.762 [2024-11-19 06:53:39.538428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.538517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.538527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:47.762 [2024-11-19 06:53:39.538536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:47.762 [2024-11-19 06:53:39.538544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.538568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.538577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:47.762 [2024-11-19 06:53:39.538588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:47.762 [2024-11-19 06:53:39.538596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.538627] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:47.762 [2024-11-19 06:53:39.538637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.762 [2024-11-19 06:53:39.538645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:47.762 [2024-11-19 06:53:39.538652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:47.762 [2024-11-19 06:53:39.538659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.762 [2024-11-19 06:53:39.564933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:47.762 [2024-11-19 06:53:39.564985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:47.762 [2024-11-19 06:53:39.564997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.252 ms 00:29:47.762 [2024-11-19 06:53:39.565005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.763 [2024-11-19 06:53:39.565091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.763 [2024-11-19 06:53:39.565102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:47.763 [2024-11-19 06:53:39.565110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:47.763 [2024-11-19 06:53:39.565119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.763 [2024-11-19 06:53:39.566300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.726 ms, result 0 00:29:49.150  [2024-11-19T06:53:42.021Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-19T06:53:42.965Z] Copying: 32/1024 [MB] (17 MBps) [2024-11-19T06:53:43.908Z] Copying: 46/1024 [MB] (13 MBps) [2024-11-19T06:53:44.849Z] Copying: 59/1024 [MB] (13 MBps) [2024-11-19T06:53:45.792Z] Copying: 87/1024 [MB] (27 MBps) [2024-11-19T06:53:46.817Z] Copying: 107/1024 [MB] (20 MBps) [2024-11-19T06:53:47.824Z] Copying: 124/1024 [MB] (16 MBps) [2024-11-19T06:53:48.766Z] Copying: 147/1024 [MB] (23 MBps) [2024-11-19T06:53:50.156Z] Copying: 167/1024 [MB] (19 MBps) [2024-11-19T06:53:51.098Z] Copying: 193/1024 [MB] (26 MBps) [2024-11-19T06:53:52.044Z] Copying: 224/1024 [MB] (30 MBps) [2024-11-19T06:53:52.984Z] Copying: 242/1024 [MB] (18 MBps) [2024-11-19T06:53:53.926Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-19T06:53:54.868Z] Copying: 276/1024 [MB] (22 MBps) [2024-11-19T06:53:55.810Z] Copying: 292/1024 [MB] (15 MBps) [2024-11-19T06:53:57.195Z] Copying: 309/1024 [MB] (16 MBps) [2024-11-19T06:53:57.767Z] Copying: 322/1024 [MB] (12 MBps) [2024-11-19T06:53:59.157Z] Copying: 333/1024 [MB] (10 MBps) [2024-11-19T06:54:00.100Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-19T06:54:01.044Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-19T06:54:01.991Z] Copying: 363/1024 [MB] (10 MBps) [2024-11-19T06:54:02.938Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-19T06:54:03.882Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-19T06:54:04.828Z] Copying: 395/1024 [MB] (10 MBps) [2024-11-19T06:54:05.769Z] Copying: 406/1024 [MB] (11 MBps) [2024-11-19T06:54:07.156Z] Copying: 436/1024 [MB] (30 MBps) [2024-11-19T06:54:08.101Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-19T06:54:09.045Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-19T06:54:09.991Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-19T06:54:10.937Z] Copying: 480/1024 [MB] (10 MBps) [2024-11-19T06:54:11.882Z] Copying: 497/1024 [MB] (17 MBps) [2024-11-19T06:54:12.828Z] Copying: 515/1024 [MB] (17 MBps) [2024-11-19T06:54:13.774Z] Copying: 532/1024 [MB] (16 MBps) [2024-11-19T06:54:15.162Z] Copying: 545/1024 [MB] (13 MBps) [2024-11-19T06:54:16.103Z] Copying: 568/1024 [MB] (22 MBps) [2024-11-19T06:54:17.048Z] Copying: 596/1024 [MB] (27 MBps) [2024-11-19T06:54:17.993Z] Copying: 625/1024 [MB] (28 MBps) [2024-11-19T06:54:18.982Z] Copying: 640/1024 [MB] (14 MBps) [2024-11-19T06:54:19.937Z] Copying: 652/1024 [MB] (11 MBps) [2024-11-19T06:54:20.882Z] Copying: 662/1024 [MB] (10 MBps) [2024-11-19T06:54:21.827Z] Copying: 674/1024 [MB] (11 MBps) [2024-11-19T06:54:22.770Z] Copying: 685/1024 [MB] (11 MBps) [2024-11-19T06:54:24.161Z] Copying: 697/1024 
[MB] (11 MBps) [2024-11-19T06:54:25.105Z] Copying: 708/1024 [MB] (10 MBps) [2024-11-19T06:54:26.046Z] Copying: 721/1024 [MB] (13 MBps) [2024-11-19T06:54:26.989Z] Copying: 735/1024 [MB] (14 MBps) [2024-11-19T06:54:27.934Z] Copying: 767/1024 [MB] (31 MBps) [2024-11-19T06:54:28.878Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-19T06:54:29.822Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-19T06:54:30.766Z] Copying: 801/1024 [MB] (11 MBps) [2024-11-19T06:54:32.152Z] Copying: 815/1024 [MB] (14 MBps) [2024-11-19T06:54:33.098Z] Copying: 827/1024 [MB] (11 MBps) [2024-11-19T06:54:34.043Z] Copying: 839/1024 [MB] (12 MBps) [2024-11-19T06:54:34.987Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-19T06:54:35.933Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-19T06:54:36.875Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-19T06:54:37.821Z] Copying: 899/1024 [MB] (27 MBps) [2024-11-19T06:54:38.756Z] Copying: 909/1024 [MB] (10 MBps) [2024-11-19T06:54:40.138Z] Copying: 945/1024 [MB] (35 MBps) [2024-11-19T06:54:41.080Z] Copying: 959/1024 [MB] (14 MBps) [2024-11-19T06:54:42.023Z] Copying: 975/1024 [MB] (15 MBps) [2024-11-19T06:54:42.967Z] Copying: 988/1024 [MB] (13 MBps) [2024-11-19T06:54:43.913Z] Copying: 1004/1024 [MB] (15 MBps) [2024-11-19T06:54:44.485Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-19T06:54:44.747Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 06:54:44.647022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.818 [2024-11-19 06:54:44.647099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:52.818 [2024-11-19 06:54:44.647116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:52.818 [2024-11-19 06:54:44.647126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.818 [2024-11-19 06:54:44.647151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:52.818 [2024-11-19 06:54:44.650569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.818 [2024-11-19 06:54:44.650616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:52.818 [2024-11-19 06:54:44.650628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.400 ms 00:30:52.818 [2024-11-19 06:54:44.650638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.818 [2024-11-19 06:54:44.650887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.818 [2024-11-19 06:54:44.650900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:52.818 [2024-11-19 06:54:44.650910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:30:52.819 [2024-11-19 06:54:44.650918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.819 [2024-11-19 06:54:44.650962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.819 [2024-11-19 06:54:44.650977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:52.819 [2024-11-19 06:54:44.650986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:52.819 [2024-11-19 06:54:44.650994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.819 [2024-11-19 06:54:44.651154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.819 [2024-11-19 06:54:44.651165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:52.819 [2024-11-19 
06:54:44.651174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:52.819 [2024-11-19 06:54:44.651182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.819 [2024-11-19 06:54:44.651197] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:52.819 [2024-11-19 06:54:44.651210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651649] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651872] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:52.819 [2024-11-19 06:54:44.651920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.651995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 
06:54:44.652084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:52.820 [2024-11-19 06:54:44.652117] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:52.820 [2024-11-19 06:54:44.652126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8dd10ba-9214-4021-bd1f-0d68f51e30a0 00:30:52.820 [2024-11-19 06:54:44.652138] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:52.820 [2024-11-19 06:54:44.652146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:52.820 [2024-11-19 06:54:44.652153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:52.820 [2024-11-19 06:54:44.652163] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:52.820 [2024-11-19 06:54:44.652170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:52.820 [2024-11-19 06:54:44.652179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:52.820 [2024-11-19 06:54:44.652189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:52.820 [2024-11-19 06:54:44.652197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:52.820 [2024-11-19 06:54:44.652204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:52.820 [2024-11-19 06:54:44.652212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.820 [2024-11-19 06:54:44.652221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:52.820 [2024-11-19 06:54:44.652229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:30:52.820 [2024-11-19 06:54:44.652238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.667584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.820 [2024-11-19 06:54:44.667639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:52.820 [2024-11-19 06:54:44.667651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.326 ms 00:30:52.820 [2024-11-19 06:54:44.667659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.668067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.820 [2024-11-19 06:54:44.668087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:52.820 [2024-11-19 06:54:44.668096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:30:52.820 [2024-11-19 06:54:44.668111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.706887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.820 [2024-11-19 06:54:44.706950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:52.820 [2024-11-19 06:54:44.706963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.820 [2024-11-19 06:54:44.706974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.707047] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:30:52.820 [2024-11-19 06:54:44.707059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.820 [2024-11-19 06:54:44.707069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.820 [2024-11-19 06:54:44.707084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.707151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.820 [2024-11-19 06:54:44.707163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.820 [2024-11-19 06:54:44.707174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.820 [2024-11-19 06:54:44.707183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.820 [2024-11-19 06:54:44.707202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.820 [2024-11-19 06:54:44.707211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.820 [2024-11-19 06:54:44.707221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.820 [2024-11-19 06:54:44.707229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.790579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.790636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:53.082 [2024-11-19 06:54:44.790650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.790658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:53.082 [2024-11-19 06:54:44.860316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:53.082 [2024-11-19 06:54:44.860430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:53.082 [2024-11-19 06:54:44.860506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:53.082 [2024-11-19 06:54:44.860617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:30:53.082 [2024-11-19 06:54:44.860652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:53.082 [2024-11-19 06:54:44.860670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:53.082 [2024-11-19 06:54:44.860742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.082 [2024-11-19 06:54:44.860750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.082 [2024-11-19 06:54:44.860797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.082 [2024-11-19 06:54:44.860807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:53.082 [2024-11-19 06:54:44.860816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.083 [2024-11-19 06:54:44.860824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.083 [2024-11-19 06:54:44.860996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.905 ms, result 0 00:30:53.655 00:30:53.655 00:30:53.917 06:54:45 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:56.464 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:56.464 06:54:47 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:56.464 [2024-11-19 06:54:47.924685] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:56.464 [2024-11-19 06:54:47.924855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82645 ] 00:30:56.464 [2024-11-19 06:54:48.094089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.464 [2024-11-19 06:54:48.211745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.725 [2024-11-19 06:54:48.488274] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.725 [2024-11-19 06:54:48.488349] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.725 [2024-11-19 06:54:48.652055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.652122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:56.725 [2024-11-19 06:54:48.652145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:56.725 [2024-11-19 06:54:48.652156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.652217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.652230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:56.725 [2024-11-19 06:54:48.652242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:56.725 [2024-11-19 06:54:48.652250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.652272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:56.725 [2024-11-19 06:54:48.653051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:56.725 [2024-11-19 06:54:48.653084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.653094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:56.725 [2024-11-19 06:54:48.653105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:30:56.725 [2024-11-19 06:54:48.653114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.653573] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:56.725 [2024-11-19 06:54:48.653640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.653650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:56.725 [2024-11-19 06:54:48.653665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:56.725 [2024-11-19 06:54:48.653674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.653739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.653751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:56.725 [2024-11-19 06:54:48.653759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:56.725 [2024-11-19 06:54:48.653768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.654098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:56.725 [2024-11-19 06:54:48.654114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:56.725 [2024-11-19 06:54:48.654124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:30:56.725 [2024-11-19 06:54:48.654133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.654211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.654231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:56.725 [2024-11-19 06:54:48.654240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:56.725 [2024-11-19 06:54:48.654248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.654272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.725 [2024-11-19 06:54:48.654281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:56.725 [2024-11-19 06:54:48.654291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:56.725 [2024-11-19 06:54:48.654303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.725 [2024-11-19 06:54:48.654327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:56.988 [2024-11-19 06:54:48.659312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.988 [2024-11-19 06:54:48.659360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:56.988 [2024-11-19 06:54:48.659371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.991 ms 00:30:56.988 [2024-11-19 06:54:48.659380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.988 [2024-11-19 06:54:48.659417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.988 [2024-11-19 06:54:48.659433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:56.988 [2024-11-19 06:54:48.659442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:56.988 [2024-11-19 06:54:48.659451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.988 [2024-11-19 06:54:48.659509] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:56.988 [2024-11-19 06:54:48.659535] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:56.988 [2024-11-19 06:54:48.659588] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:56.988 [2024-11-19 06:54:48.659607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:56.988 [2024-11-19 06:54:48.659717] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:56.988 [2024-11-19 06:54:48.659729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:56.988 [2024-11-19 06:54:48.659742] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:56.988 [2024-11-19 06:54:48.659756] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:56.988 [2024-11-19 06:54:48.659766] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:56.988 [2024-11-19 06:54:48.659775] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:56.988 [2024-11-19 06:54:48.659788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:56.988 [2024-11-19 06:54:48.659796] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:56.988 [2024-11-19 06:54:48.659804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:56.988 [2024-11-19 06:54:48.659813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.988 [2024-11-19 06:54:48.659821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:56.988 [2024-11-19 06:54:48.659830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:30:56.988 [2024-11-19 06:54:48.659839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.988 [2024-11-19 06:54:48.659943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.988 [2024-11-19 06:54:48.659955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:56.988 [2024-11-19 06:54:48.659964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:30:56.988 [2024-11-19 06:54:48.659975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.988 [2024-11-19 06:54:48.660079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:56.988 [2024-11-19 06:54:48.660099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:56.988 [2024-11-19 06:54:48.660109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:56.988 [2024-11-19 06:54:48.660132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:56.988 [2024-11-19 06:54:48.660156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.988 [2024-11-19 06:54:48.660171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:56.988 [2024-11-19 06:54:48.660186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:56.988 [2024-11-19 06:54:48.660193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.988 [2024-11-19 06:54:48.660200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:56.988 [2024-11-19 06:54:48.660208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:56.988 [2024-11-19 06:54:48.660215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:56.988 [2024-11-19 06:54:48.660236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660243] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:56.988 [2024-11-19 06:54:48.660256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:56.988 [2024-11-19 06:54:48.660276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:56.988 [2024-11-19 06:54:48.660297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:56.988 [2024-11-19 06:54:48.660316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:56.988 [2024-11-19 06:54:48.660337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.988 [2024-11-19 06:54:48.660350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:56.988 [2024-11-19 06:54:48.660358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:56.988 [2024-11-19 06:54:48.660365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.988 [2024-11-19 06:54:48.660373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:56.988 [2024-11-19 06:54:48.660380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:56.988 [2024-11-19 06:54:48.660388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:56.988 [2024-11-19 06:54:48.660404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:56.988 [2024-11-19 06:54:48.660410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660420] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:56.988 [2024-11-19 06:54:48.660430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:56.988 [2024-11-19 06:54:48.660437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.988 [2024-11-19 06:54:48.660453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:56.988 [2024-11-19 06:54:48.660462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:56.988 [2024-11-19 06:54:48.660471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:56.988 
[2024-11-19 06:54:48.660478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:56.988 [2024-11-19 06:54:48.660485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:56.988 [2024-11-19 06:54:48.660492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:56.988 [2024-11-19 06:54:48.660500] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:56.988 [2024-11-19 06:54:48.660512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.988 [2024-11-19 06:54:48.660522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:56.988 [2024-11-19 06:54:48.660530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:56.989 [2024-11-19 06:54:48.660540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:56.989 [2024-11-19 06:54:48.660548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:56.989 [2024-11-19 06:54:48.660557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:56.989 [2024-11-19 06:54:48.660564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:56.989 [2024-11-19 06:54:48.660571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:56.989 [2024-11-19 06:54:48.660578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:56.989 [2024-11-19 06:54:48.660585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:56.989 [2024-11-19 06:54:48.660593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:56.989 [2024-11-19 06:54:48.660632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:56.989 [2024-11-19 06:54:48.660641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:56.989 [2024-11-19 06:54:48.660656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:56.989 [2024-11-19 06:54:48.660663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:56.989 [2024-11-19 06:54:48.660670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:56.989 [2024-11-19 06:54:48.660679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.660688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:56.989 [2024-11-19 06:54:48.660697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:30:56.989 [2024-11-19 06:54:48.660704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.692905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.692961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:56.989 [2024-11-19 06:54:48.692974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.159 ms 00:30:56.989 [2024-11-19 06:54:48.692984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.693071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.693079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:56.989 [2024-11-19 06:54:48.693088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:56.989 [2024-11-19 06:54:48.693101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.745323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.745557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:56.989 [2024-11-19 06:54:48.745581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.163 ms 00:30:56.989 [2024-11-19 06:54:48.745592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.745654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.745666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:56.989 [2024-11-19 06:54:48.745676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:56.989 [2024-11-19 06:54:48.745685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.745827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.745840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:56.989 [2024-11-19 06:54:48.745850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:56.989 [2024-11-19 06:54:48.745860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.746035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.746050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:56.989 [2024-11-19 06:54:48.746062] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:56.989 [2024-11-19 06:54:48.746072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.764399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.764591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:56.989 [2024-11-19 06:54:48.764611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.304 ms 00:30:56.989 [2024-11-19 06:54:48.764620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.764780] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:56.989 [2024-11-19 06:54:48.764798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:56.989 [2024-11-19 06:54:48.764809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.764821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:56.989 [2024-11-19 06:54:48.764831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:56.989 [2024-11-19 06:54:48.764840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.777191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.777236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:56.989 [2024-11-19 06:54:48.777248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.332 ms 00:30:56.989 [2024-11-19 06:54:48.777256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.777399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.777409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:56.989 [2024-11-19 06:54:48.777420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:30:56.989 [2024-11-19 06:54:48.777435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.777489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.777500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:56.989 [2024-11-19 06:54:48.777509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:56.989 [2024-11-19 06:54:48.777518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.778181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.778198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:56.989 [2024-11-19 06:54:48.778209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:30:56.989 [2024-11-19 06:54:48.778218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.778238] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:56.989 [2024-11-19 06:54:48.778255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.778265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:56.989 [2024-11-19 06:54:48.778274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:56.989 [2024-11-19 06:54:48.778283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.792841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:56.989 [2024-11-19 06:54:48.793034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.793048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:56.989 [2024-11-19 06:54:48.793060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.731 ms 00:30:56.989 [2024-11-19 06:54:48.793069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.795345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.795508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:56.989 [2024-11-19 06:54:48.795526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:30:56.989 [2024-11-19 06:54:48.795536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.795669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.795683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:56.989 [2024-11-19 06:54:48.795693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:56.989 [2024-11-19 06:54:48.795702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.795732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.795741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:56.989 [2024-11-19 06:54:48.795756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:56.989 [2024-11-19 06:54:48.795765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.795805] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:56.989 [2024-11-19 06:54:48.795817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.795826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:56.989 [2024-11-19 06:54:48.795835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:56.989 [2024-11-19 06:54:48.795845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.989 [2024-11-19 06:54:48.823655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.989 [2024-11-19 06:54:48.823714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:56.989 [2024-11-19 06:54:48.823729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.790 ms 00:30:56.990 [2024-11-19 06:54:48.823738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.990 [2024-11-19 06:54:48.823832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.990 [2024-11-19 06:54:48.823843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:56.990 [2024-11-19 06:54:48.823853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.044 ms 00:30:56.990 [2024-11-19 06:54:48.823862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.990 [2024-11-19 06:54:48.825663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 173.055 ms, result 0 00:30:57.941  [2024-11-19T06:54:50.873Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-19T06:54:52.247Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-19T06:54:53.189Z] Copying: 48/1024 [MB] (27 MBps) [2024-11-19T06:54:54.123Z] Copying: 71/1024 [MB] (22 MBps) [2024-11-19T06:54:55.059Z] Copying: 94/1024 [MB] (22 MBps) [2024-11-19T06:54:56.004Z] Copying: 136/1024 [MB] (42 MBps) [2024-11-19T06:54:56.945Z] Copying: 149/1024 [MB] (12 MBps) [2024-11-19T06:54:57.878Z] Copying: 173/1024 [MB] (24 MBps) [2024-11-19T06:54:59.260Z] Copying: 209/1024 [MB] (36 MBps) [2024-11-19T06:55:00.192Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-19T06:55:01.126Z] Copying: 256/1024 [MB] (27 MBps) [2024-11-19T06:55:02.061Z] Copying: 280/1024 [MB] (24 MBps) [2024-11-19T06:55:02.996Z] Copying: 325/1024 [MB] (44 MBps) [2024-11-19T06:55:03.940Z] Copying: 364/1024 [MB] (38 MBps) [2024-11-19T06:55:04.886Z] Copying: 379/1024 [MB] (15 MBps) [2024-11-19T06:55:06.264Z] Copying: 390/1024 [MB] (10 MBps) [2024-11-19T06:55:07.196Z] Copying: 413/1024 [MB] (23 MBps) [2024-11-19T06:55:08.132Z] Copying: 443/1024 [MB] (30 MBps) [2024-11-19T06:55:09.075Z] Copying: 484/1024 [MB] (40 MBps) [2024-11-19T06:55:10.037Z] Copying: 498/1024 [MB] (14 MBps) [2024-11-19T06:55:10.979Z] Copying: 511/1024 [MB] (12 MBps) [2024-11-19T06:55:11.923Z] Copying: 522/1024 [MB] (10 MBps) [2024-11-19T06:55:12.865Z] Copying: 535/1024 [MB] (13 MBps) [2024-11-19T06:55:14.246Z] Copying: 549/1024 [MB] (13 MBps) [2024-11-19T06:55:15.183Z] Copying: 562/1024 [MB] (13 MBps) [2024-11-19T06:55:16.122Z] Copying: 594/1024 [MB] (32 MBps) [2024-11-19T06:55:17.056Z] Copying: 610/1024 [MB] (15 MBps) [2024-11-19T06:55:17.997Z] Copying: 639/1024 [MB] (29 MBps) [2024-11-19T06:55:18.951Z] Copying: 650/1024 [MB] (10 MBps) [2024-11-19T06:55:19.907Z] Copying: 663/1024 [MB] (13 MBps) [2024-11-19T06:55:20.850Z] Copying: 676/1024 [MB] (13 MBps) [2024-11-19T06:55:22.243Z] Copying: 693/1024 [MB] (17 MBps) [2024-11-19T06:55:22.877Z] Copying: 714/1024 [MB] (20 MBps) [2024-11-19T06:55:24.264Z] Copying: 726/1024 [MB] (12 MBps) [2024-11-19T06:55:24.838Z] Copying: 741/1024 [MB] (15 MBps) [2024-11-19T06:55:26.220Z] Copying: 759/1024 [MB] (18 MBps) [2024-11-19T06:55:27.163Z] Copying: 784/1024 [MB] (24 MBps) [2024-11-19T06:55:28.108Z] Copying: 794/1024 [MB] (10 MBps) [2024-11-19T06:55:29.052Z] Copying: 809/1024 [MB] (15 MBps) [2024-11-19T06:55:29.996Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-19T06:55:30.943Z] Copying: 835/1024 [MB] (15 MBps) [2024-11-19T06:55:31.890Z] Copying: 851/1024 [MB] (16 MBps) [2024-11-19T06:55:32.836Z] Copying: 868/1024 [MB] (17 MBps) [2024-11-19T06:55:34.227Z] Copying: 879/1024 [MB] (10 MBps) [2024-11-19T06:55:35.170Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-19T06:55:36.114Z] Copying: 920660/1048576 [kB] (10180 kBps) [2024-11-19T06:55:37.060Z] Copying: 909/1024 [MB] (10 MBps) [2024-11-19T06:55:37.999Z] Copying: 941220/1048576 [kB] (10228 kBps) [2024-11-19T06:55:38.943Z] Copying: 939/1024 [MB] (20 MBps) [2024-11-19T06:55:39.882Z] Copying: 955/1024 [MB] (15 MBps) [2024-11-19T06:55:41.260Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-19T06:55:42.203Z] Copying: 1008/1024 [MB] (42 MBps) [2024-11-19T06:55:42.778Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-19T06:55:42.778Z] 
Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-19 06:55:42.595490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.849 [2024-11-19 06:55:42.595563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:50.849 [2024-11-19 06:55:42.595594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:50.849 [2024-11-19 06:55:42.595604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.849 [2024-11-19 06:55:42.597971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:50.849 [2024-11-19 06:55:42.604003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.849 [2024-11-19 06:55:42.604048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:50.849 [2024-11-19 06:55:42.604061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.983 ms 00:31:50.850 [2024-11-19 06:55:42.604070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.850 [2024-11-19 06:55:42.614268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.850 [2024-11-19 06:55:42.614322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:50.850 [2024-11-19 06:55:42.614335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.808 ms 00:31:50.850 [2024-11-19 06:55:42.614344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.850 [2024-11-19 06:55:42.614374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.850 [2024-11-19 06:55:42.614384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:50.850 [2024-11-19 06:55:42.614394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:50.850 [2024-11-19 06:55:42.614403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.850 [2024-11-19 06:55:42.614467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.850 [2024-11-19 06:55:42.614477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:50.850 [2024-11-19 06:55:42.614489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:50.850 [2024-11-19 06:55:42.614497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.850 [2024-11-19 06:55:42.614512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:50.850 [2024-11-19 06:55:42.614524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 00:31:50.850 [2024-11-19 06:55:42.614534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:31:50.850 [2024-11-19 06:55:42.614584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.614989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:50.850 [2024-11-19 06:55:42.615066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:50.851 [2024-11-19 06:55:42.615399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:50.851 [2024-11-19 06:55:42.615408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8dd10ba-9214-4021-bd1f-0d68f51e30a0 00:31:50.851 [2024-11-19 06:55:42.615416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 00:31:50.851 [2024-11-19 06:55:42.615424] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128288 00:31:50.851 [2024-11-19 06:55:42.615431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 00:31:50.851 [2024-11-19 06:55:42.615439] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:50.851 [2024-11-19 06:55:42.615446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:50.851 [2024-11-19 06:55:42.615455] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:50.851 [2024-11-19 06:55:42.615466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:50.851 [2024-11-19 06:55:42.615474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:50.851 [2024-11-19 06:55:42.615481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:50.851 [2024-11-19 06:55:42.615488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.851 [2024-11-19 06:55:42.615496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:50.851 [2024-11-19 06:55:42.615505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:31:50.851 [2024-11-19 06:55:42.615513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.629531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.851 [2024-11-19 06:55:42.629715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:50.851 [2024-11-19 06:55:42.629735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.000 ms 00:31:50.851 [2024-11-19 06:55:42.629751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.630185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.851 [2024-11-19 06:55:42.630203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:50.851 [2024-11-19 06:55:42.630214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:31:50.851 [2024-11-19 06:55:42.630223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.666278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.851 [2024-11-19 06:55:42.666323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:50.851 [2024-11-19 06:55:42.666339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.851 [2024-11-19 06:55:42.666347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.666414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.851 [2024-11-19 06:55:42.666423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:50.851 [2024-11-19 06:55:42.666431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.851 [2024-11-19 06:55:42.666439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.666524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.851 [2024-11-19 06:55:42.666536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:50.851 [2024-11-19 06:55:42.666544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.851 [2024-11-19 06:55:42.666556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.851 [2024-11-19 06:55:42.666573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.851 [2024-11-19 06:55:42.666582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:50.851 [2024-11-19 06:55:42.666590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.851 [2024-11-19 06:55:42.666598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:50.851 [2024-11-19 06:55:42.750760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.851 [2024-11-19 06:55:42.750815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:50.851 [2024-11-19 06:55:42.750837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.851 [2024-11-19 06:55:42.750846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:51.113 [2024-11-19 06:55:42.819403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:51.113 [2024-11-19 06:55:42.819497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:51.113 [2024-11-19 06:55:42.819605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:51.113 [2024-11-19 06:55:42.819714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:51.113 [2024-11-19 06:55:42.819770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:51.113 [2024-11-19 06:55:42.819840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 06:55:42.819848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.819895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.113 [2024-11-19 06:55:42.819905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:51.113 [2024-11-19 06:55:42.819914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.113 [2024-11-19 
06:55:42.819952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.113 [2024-11-19 06:55:42.820089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 226.712 ms, result 0 00:31:52.500 00:31:52.500 00:31:52.500 06:55:44 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:52.500 [2024-11-19 06:55:44.257836] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:52.500 [2024-11-19 06:55:44.258006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83225 ] 00:31:52.500 [2024-11-19 06:55:44.422561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.762 [2024-11-19 06:55:44.544891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.023 [2024-11-19 06:55:44.831909] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:53.023 [2024-11-19 06:55:44.832005] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:53.287 [2024-11-19 06:55:44.992993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.287 [2024-11-19 06:55:44.993045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:53.287 [2024-11-19 06:55:44.993067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:53.287 [2024-11-19 06:55:44.993077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.287 [2024-11-19 06:55:44.993132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.287 [2024-11-19 06:55:44.993143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:53.287 [2024-11-19 06:55:44.993155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:53.287 [2024-11-19 06:55:44.993163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.287 [2024-11-19 06:55:44.993184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:53.287 [2024-11-19 06:55:44.993958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:53.287 [2024-11-19 06:55:44.993986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.287 [2024-11-19 06:55:44.993996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:53.287 [2024-11-19 06:55:44.994006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:31:53.287 [2024-11-19 06:55:44.994014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.287 [2024-11-19 06:55:44.994303] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:53.287 [2024-11-19 06:55:44.994336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.287 [2024-11-19 06:55:44.994345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:53.287 [2024-11-19 06:55:44.994359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:31:53.287 [2024-11-19 06:55:44.994367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.287 [2024-11-19 06:55:44.994420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.287 [2024-11-19 06:55:44.994431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:53.288 [2024-11-19 06:55:44.994439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:53.288 [2024-11-19 06:55:44.994446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.994755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.994770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:53.288 [2024-11-19 06:55:44.994780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:31:53.288 [2024-11-19 06:55:44.994788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.994857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.994866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:53.288 [2024-11-19 06:55:44.994875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:53.288 [2024-11-19 06:55:44.994882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.994906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.994915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:53.288 [2024-11-19 06:55:44.994951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:53.288 [2024-11-19 06:55:44.994964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.994983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:53.288 [2024-11-19 06:55:44.999295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.999338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:53.288 [2024-11-19 06:55:44.999348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.318 ms 00:31:53.288 [2024-11-19 06:55:44.999357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.999397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.999406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:53.288 [2024-11-19 06:55:44.999414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:53.288 [2024-11-19 06:55:44.999421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.999475] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:53.288 [2024-11-19 06:55:44.999500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:53.288 [2024-11-19 06:55:44.999538] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:53.288 [2024-11-19 06:55:44.999554] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:31:53.288 [2024-11-19 06:55:44.999675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:53.288 [2024-11-19 06:55:44.999686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:53.288 [2024-11-19 06:55:44.999698] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:53.288 [2024-11-19 06:55:44.999709] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:53.288 [2024-11-19 06:55:44.999717] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:53.288 [2024-11-19 06:55:44.999725] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:53.288 [2024-11-19 06:55:44.999736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:53.288 [2024-11-19 06:55:44.999745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:53.288 [2024-11-19 06:55:44.999753] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:53.288 [2024-11-19 06:55:44.999761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.999768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:53.288 [2024-11-19 06:55:44.999775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:31:53.288 [2024-11-19 06:55:44.999783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:44.999866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.288 [2024-11-19 06:55:44.999874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:53.288 [2024-11-19 06:55:44.999882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:53.288 [2024-11-19 06:55:44.999892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.288 [2024-11-19 06:55:45.000020] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:53.288 [2024-11-19 06:55:45.000033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:53.288 [2024-11-19 06:55:45.000043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:53.288 [2024-11-19 06:55:45.000067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:53.288 [2024-11-19 06:55:45.000089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.288 [2024-11-19 06:55:45.000103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:53.288 [2024-11-19 06:55:45.000109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:53.288 [2024-11-19 06:55:45.000116] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.288 [2024-11-19 06:55:45.000123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:53.288 [2024-11-19 06:55:45.000133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:53.288 [2024-11-19 06:55:45.000140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:53.288 [2024-11-19 06:55:45.000161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:53.288 [2024-11-19 06:55:45.000180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:53.288 [2024-11-19 06:55:45.000200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:53.288 [2024-11-19 06:55:45.000220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:53.288 [2024-11-19 06:55:45.000241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:53.288 [2024-11-19 06:55:45.000262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.288 [2024-11-19 06:55:45.000275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:53.288 [2024-11-19 06:55:45.000281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:53.288 [2024-11-19 06:55:45.000288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.288 [2024-11-19 06:55:45.000295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:53.288 [2024-11-19 06:55:45.000301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:53.288 [2024-11-19 06:55:45.000307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:53.288 [2024-11-19 06:55:45.000320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:53.288 [2024-11-19 06:55:45.000326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:53.288 [2024-11-19 
06:55:45.000340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:53.288 [2024-11-19 06:55:45.000348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.288 [2024-11-19 06:55:45.000364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:53.288 [2024-11-19 06:55:45.000371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:53.288 [2024-11-19 06:55:45.000377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:53.288 [2024-11-19 06:55:45.000384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:53.288 [2024-11-19 06:55:45.000391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:53.288 [2024-11-19 06:55:45.000397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:53.288 [2024-11-19 06:55:45.000405] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:53.288 [2024-11-19 06:55:45.000416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.288 [2024-11-19 06:55:45.000424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:53.288 [2024-11-19 06:55:45.000432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:53.289 [2024-11-19 06:55:45.000439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:53.289 [2024-11-19 06:55:45.000446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:53.289 [2024-11-19 06:55:45.000453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:53.289 [2024-11-19 06:55:45.000459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:53.289 [2024-11-19 06:55:45.000466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:53.289 [2024-11-19 06:55:45.000472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:53.289 [2024-11-19 06:55:45.000479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:53.289 [2024-11-19 06:55:45.000487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:53.289 [2024-11-19 06:55:45.000524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:53.289 [2024-11-19 06:55:45.000532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:53.289 [2024-11-19 06:55:45.000547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:53.289 [2024-11-19 06:55:45.000554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:53.289 [2024-11-19 06:55:45.000561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:53.289 [2024-11-19 06:55:45.000568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.000575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:53.289 [2024-11-19 06:55:45.000583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:31:53.289 [2024-11-19 06:55:45.000592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.028481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.028520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:53.289 [2024-11-19 06:55:45.028531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.846 ms 00:31:53.289 [2024-11-19 06:55:45.028540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.028636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.028646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:53.289 [2024-11-19 06:55:45.028655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:53.289 [2024-11-19 06:55:45.028666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.083330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.083383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:53.289 [2024-11-19 06:55:45.083397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.607 ms 00:31:53.289 [2024-11-19 06:55:45.083407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.083461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.083472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:53.289 [2024-11-19 06:55:45.083482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:53.289 [2024-11-19 06:55:45.083490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.083631] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.083643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:53.289 [2024-11-19 06:55:45.083652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:53.289 [2024-11-19 06:55:45.083661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.083794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.083807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:53.289 [2024-11-19 06:55:45.083816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:53.289 [2024-11-19 06:55:45.083825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.099492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.099537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:53.289 [2024-11-19 06:55:45.099549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.646 ms 00:31:53.289 [2024-11-19 06:55:45.099557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.099732] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:53.289 [2024-11-19 06:55:45.099747] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:53.289 [2024-11-19 06:55:45.099757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.099768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:53.289 [2024-11-19 06:55:45.099777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:53.289 [2024-11-19 06:55:45.099786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.112122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.112161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:53.289 [2024-11-19 06:55:45.112172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:31:53.289 [2024-11-19 06:55:45.112180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.112300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.112310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:53.289 [2024-11-19 06:55:45.112318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:53.289 [2024-11-19 06:55:45.112331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.112384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.112394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:53.289 [2024-11-19 06:55:45.112402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:53.289 [2024-11-19 06:55:45.112410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.113030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 
06:55:45.113064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:53.289 [2024-11-19 06:55:45.113074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:31:53.289 [2024-11-19 06:55:45.113082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.113101] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:53.289 [2024-11-19 06:55:45.113115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.113125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:53.289 [2024-11-19 06:55:45.113134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:53.289 [2024-11-19 06:55:45.113142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.125711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:53.289 [2024-11-19 06:55:45.125885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.125896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:53.289 [2024-11-19 06:55:45.125906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.722 ms 00:31:53.289 [2024-11-19 06:55:45.125915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.128244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.128280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:53.289 [2024-11-19 06:55:45.128291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:31:53.289 [2024-11-19 06:55:45.128300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.128377] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:53.289 [2024-11-19 06:55:45.128859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.128869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:53.289 [2024-11-19 06:55:45.128879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:31:53.289 [2024-11-19 06:55:45.128888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.128914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.128952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:53.289 [2024-11-19 06:55:45.128961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:53.289 [2024-11-19 06:55:45.128970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.129004] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:53.289 [2024-11-19 06:55:45.129014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.129022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:53.289 [2024-11-19 06:55:45.129030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:53.289 [2024-11-19 06:55:45.129037] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.289 [2024-11-19 06:55:45.155831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.289 [2024-11-19 06:55:45.156027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:53.289 [2024-11-19 06:55:45.156049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.775 ms 00:31:53.290 [2024-11-19 06:55:45.156060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.290 [2024-11-19 06:55:45.156141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.290 [2024-11-19 06:55:45.156152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:53.290 [2024-11-19 06:55:45.156161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:53.290 [2024-11-19 06:55:45.156170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.290 [2024-11-19 06:55:45.157386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.909 ms, result 0 00:31:54.680  [2024-11-19T06:55:47.551Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-19T06:55:48.498Z] Copying: 39/1024 [MB] (28 MBps) [2024-11-19T06:55:49.444Z] Copying: 63/1024 [MB] (23 MBps) [2024-11-19T06:55:50.478Z] Copying: 81/1024 [MB] (18 MBps) [2024-11-19T06:55:51.420Z] Copying: 96/1024 [MB] (15 MBps) [2024-11-19T06:55:52.367Z] Copying: 118/1024 [MB] (21 MBps) [2024-11-19T06:55:53.758Z] Copying: 136/1024 [MB] (18 MBps) [2024-11-19T06:55:54.700Z] Copying: 157/1024 [MB] (21 MBps) [2024-11-19T06:55:55.643Z] Copying: 173/1024 [MB] (16 MBps) [2024-11-19T06:55:56.588Z] Copying: 190/1024 [MB] (16 MBps) [2024-11-19T06:55:57.537Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-19T06:55:58.480Z] Copying: 226/1024 [MB] (20 MBps) [2024-11-19T06:55:59.435Z] Copying: 243/1024 [MB] (17 MBps) [2024-11-19T06:56:00.381Z] Copying: 264/1024 [MB] (20 MBps) [2024-11-19T06:56:01.771Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-19T06:56:02.716Z] Copying: 285/1024 [MB] (10 MBps) [2024-11-19T06:56:03.661Z] Copying: 302/1024 [MB] (17 MBps) [2024-11-19T06:56:04.605Z] Copying: 313/1024 [MB] (11 MBps) [2024-11-19T06:56:05.552Z] Copying: 332/1024 [MB] (18 MBps) [2024-11-19T06:56:06.492Z] Copying: 346/1024 [MB] (13 MBps) [2024-11-19T06:56:07.436Z] Copying: 378/1024 [MB] (31 MBps) [2024-11-19T06:56:08.375Z] Copying: 392/1024 [MB] (14 MBps) [2024-11-19T06:56:09.762Z] Copying: 427/1024 [MB] (34 MBps) [2024-11-19T06:56:10.705Z] Copying: 447/1024 [MB] (20 MBps) [2024-11-19T06:56:11.649Z] Copying: 466/1024 [MB] (19 MBps) [2024-11-19T06:56:12.593Z] Copying: 490/1024 [MB] (23 MBps) [2024-11-19T06:56:13.536Z] Copying: 516/1024 [MB] (26 MBps) [2024-11-19T06:56:14.481Z] Copying: 541/1024 [MB] (24 MBps) [2024-11-19T06:56:15.426Z] Copying: 562/1024 [MB] (21 MBps) [2024-11-19T06:56:16.369Z] Copying: 587/1024 [MB] (24 MBps) [2024-11-19T06:56:17.758Z] Copying: 606/1024 [MB] (19 MBps) [2024-11-19T06:56:18.702Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-19T06:56:19.374Z] Copying: 632/1024 [MB] (15 MBps) [2024-11-19T06:56:20.761Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-19T06:56:21.705Z] Copying: 655/1024 [MB] (11 MBps) [2024-11-19T06:56:22.649Z] Copying: 666/1024 [MB] (11 MBps) [2024-11-19T06:56:23.594Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-19T06:56:24.540Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-19T06:56:25.485Z] Copying: 701/1024 [MB] (11 MBps) [2024-11-19T06:56:26.431Z] Copying: 713/1024 [MB] (11 
MBps) [2024-11-19T06:56:27.376Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-19T06:56:28.756Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-19T06:56:29.689Z] Copying: 754/1024 [MB] (18 MBps) [2024-11-19T06:56:30.623Z] Copying: 803/1024 [MB] (49 MBps) [2024-11-19T06:56:31.557Z] Copying: 852/1024 [MB] (48 MBps) [2024-11-19T06:56:32.490Z] Copying: 903/1024 [MB] (51 MBps) [2024-11-19T06:56:33.424Z] Copying: 952/1024 [MB] (48 MBps) [2024-11-19T06:56:33.991Z] Copying: 1003/1024 [MB] (50 MBps) [2024-11-19T06:56:34.250Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-19 06:56:34.053855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.321 [2024-11-19 06:56:34.053939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:42.321 [2024-11-19 06:56:34.053955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:42.321 [2024-11-19 06:56:34.053964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.321 [2024-11-19 06:56:34.053987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:42.321 [2024-11-19 06:56:34.057130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.321 [2024-11-19 06:56:34.057225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:42.321 [2024-11-19 06:56:34.057285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:32:42.321 [2024-11-19 06:56:34.057309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.321 [2024-11-19 06:56:34.057548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.321 [2024-11-19 06:56:34.057581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:42.321 [2024-11-19 06:56:34.057603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:32:42.321 [2024-11-19 06:56:34.057659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.321 [2024-11-19 06:56:34.057703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.321 [2024-11-19 06:56:34.057724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:42.321 [2024-11-19 06:56:34.057793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:42.321 [2024-11-19 06:56:34.057817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.321 [2024-11-19 06:56:34.057881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.321 [2024-11-19 06:56:34.057904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:42.321 [2024-11-19 06:56:34.057997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:42.321 [2024-11-19 06:56:34.058016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.321 [2024-11-19 06:56:34.058082] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:42.321 [2024-11-19 06:56:34.058111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:42.321 [2024-11-19 06:56:34.058144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058243] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.058987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 
06:56:34.059146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:42.321 [2024-11-19 06:56:34.059295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.059991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:32:42.322 [2024-11-19 06:56:34.060051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:42.322 [2024-11-19 06:56:34.060635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:42.322 [2024-11-19 06:56:34.060643] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8dd10ba-9214-4021-bd1f-0d68f51e30a0 00:32:42.322 [2024-11-19 06:56:34.060651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 
00:32:42.322 [2024-11-19 06:56:34.060659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2848 00:32:42.322 [2024-11-19 06:56:34.060667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2816 00:32:42.322 [2024-11-19 06:56:34.060675] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:32:42.322 [2024-11-19 06:56:34.060682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:42.322 [2024-11-19 06:56:34.060692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:42.322 [2024-11-19 06:56:34.060700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:42.322 [2024-11-19 06:56:34.060707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:42.322 [2024-11-19 06:56:34.060713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:42.322 [2024-11-19 06:56:34.060721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.322 [2024-11-19 06:56:34.060729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:42.322 [2024-11-19 06:56:34.060736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:32:42.322 [2024-11-19 06:56:34.060743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.076052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.322 [2024-11-19 06:56:34.076158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:42.322 [2024-11-19 06:56:34.076223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.292 ms 00:32:42.322 [2024-11-19 06:56:34.076279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.076646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.322 [2024-11-19 06:56:34.076903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:42.322 [2024-11-19 06:56:34.077001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:32:42.322 [2024-11-19 06:56:34.077028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.112084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.322 [2024-11-19 06:56:34.112196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.322 [2024-11-19 06:56:34.112249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.322 [2024-11-19 06:56:34.112275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.112348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.322 [2024-11-19 06:56:34.112399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.322 [2024-11-19 06:56:34.112425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.322 [2024-11-19 06:56:34.112444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.112528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.322 [2024-11-19 06:56:34.112555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.322 [2024-11-19 06:56:34.112675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.322 [2024-11-19 06:56:34.112698] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.112727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.322 [2024-11-19 06:56:34.112748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.322 [2024-11-19 06:56:34.112767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.322 [2024-11-19 06:56:34.112785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.322 [2024-11-19 06:56:34.193269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.322 [2024-11-19 06:56:34.193406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.322 [2024-11-19 06:56:34.193483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.322 [2024-11-19 06:56:34.193506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.580 [2024-11-19 06:56:34.258783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.580 [2024-11-19 06:56:34.258920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.580 [2024-11-19 06:56:34.258989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.580 [2024-11-19 06:56:34.259035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.580 [2024-11-19 06:56:34.259127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.580 [2024-11-19 06:56:34.259399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.580 [2024-11-19 06:56:34.259424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.580 [2024-11-19 06:56:34.259440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.580 [2024-11-19 06:56:34.259497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.580 [2024-11-19 06:56:34.259509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.580 [2024-11-19 06:56:34.259518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.580 [2024-11-19 06:56:34.259527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.580 [2024-11-19 06:56:34.259628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.580 [2024-11-19 06:56:34.259641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.580 [2024-11-19 06:56:34.259649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.580 [2024-11-19 06:56:34.259657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.580 [2024-11-19 06:56:34.259686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.580 [2024-11-19 06:56:34.259696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:42.580 [2024-11-19 06:56:34.259704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.581 [2024-11-19 06:56:34.259712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.581 [2024-11-19 06:56:34.259752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.581 [2024-11-19 06:56:34.259761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:42.581 [2024-11-19 06:56:34.259770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:42.581 [2024-11-19 06:56:34.259778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.581 [2024-11-19 06:56:34.259825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.581 [2024-11-19 06:56:34.259836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:42.581 [2024-11-19 06:56:34.259845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.581 [2024-11-19 06:56:34.259852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.581 [2024-11-19 06:56:34.259999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 206.090 ms, result 0 00:32:43.147 00:32:43.147 00:32:43.147 06:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:45.692 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:45.692 Process with pid 81130 is not found 00:32:45.692 Remove shared memory files 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81130 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81130 ']' 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81130 00:32:45.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81130) - No such process 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81130 is not found' 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_band_md /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_l2p_l1 /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_l2p_l2 /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_l2p_l2_ctx /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_nvc_md /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_p2l_pool /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_sb /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_sb_shm /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_trim_bitmap /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_trim_log /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_trim_md /dev/hugepages/ftl_e8dd10ba-9214-4021-bd1f-0d68f51e30a0_vmap 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:45.692 
************************************ 00:32:45.692 END TEST ftl_restore_fast 00:32:45.692 ************************************ 00:32:45.692 00:32:45.692 real 4m16.883s 00:32:45.692 user 4m4.564s 00:32:45.692 sys 0m12.226s 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.692 06:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:45.692 Process with pid 72278 is not found 00:32:45.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@14 -- # killprocess 72278 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 72278 ']' 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@958 -- # kill -0 72278 00:32:45.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72278) - No such process 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72278 is not found' 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83798 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83798 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 83798 ']' 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.692 06:56:37 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:45.692 06:56:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:45.692 [2024-11-19 06:56:37.382578] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
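at_ftl_exit then brings up a plain spdk_tgt (pid 83798 here) and waits for its RPC socket before deleting any leftover lvol stores. A rough sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a readiness probe (the real waitforlisten helper in autotest_common.sh is more thorough about timeouts and error handling):

    rootdir=/home/vagrant/spdk_repo/spdk   # placeholder checkout path

    # Launch the target in the background and remember its pid.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the target answers or ~50 s elapse.
    for ((i = 0; i < 100; i++)); do
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done

With the target listening, the lvstore cleanup seen next is a single pass of rpc.py bdev_lvol_get_lvstores piped through jq, feeding each UUID to bdev_lvol_delete_lvstore.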
00:32:45.692 [2024-11-19 06:56:37.382697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83798 ] 00:32:45.692 [2024-11-19 06:56:37.535731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.952 [2024-11-19 06:56:37.644871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.523 06:56:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.523 06:56:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:46.523 06:56:38 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:46.782 nvme0n1 00:32:46.782 06:56:38 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:46.782 06:56:38 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:46.782 06:56:38 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:47.043 06:56:38 ftl -- ftl/common.sh@28 -- # stores=053e59cb-76fe-4fdc-a7f2-6abf229e5e5b 00:32:47.043 06:56:38 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:47.043 06:56:38 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 053e59cb-76fe-4fdc-a7f2-6abf229e5e5b 00:32:47.043 06:56:38 ftl -- ftl/ftl.sh@23 -- # killprocess 83798 00:32:47.043 06:56:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 83798 ']' 00:32:47.043 06:56:38 ftl -- common/autotest_common.sh@958 -- # kill -0 83798 00:32:47.043 06:56:38 ftl -- common/autotest_common.sh@959 -- # uname 00:32:47.043 06:56:38 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.043 06:56:38 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83798 00:32:47.304 06:56:38 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.304 killing process with pid 83798 00:32:47.304 06:56:38 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.304 06:56:38 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83798' 00:32:47.304 06:56:38 ftl -- common/autotest_common.sh@973 -- # kill 83798 00:32:47.304 06:56:38 ftl -- common/autotest_common.sh@978 -- # wait 83798 00:32:48.688 06:56:40 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:48.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:48.949 Waiting for block devices as requested 00:32:48.949 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.209 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.209 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.209 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:54.495 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:54.495 06:56:46 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:54.495 Remove shared memory files 00:32:54.495 06:56:46 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:54.495 06:56:46 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:54.495 06:56:46 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:54.495 06:56:46 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:54.495 06:56:46 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:54.495 06:56:46 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:54.495 00:32:54.495 real 
17m17.163s 00:32:54.495 user 19m22.173s 00:32:54.495 sys 1m36.826s 00:32:54.495 06:56:46 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.495 ************************************ 00:32:54.495 END TEST ftl 00:32:54.495 ************************************ 00:32:54.495 06:56:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:54.495 06:56:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:54.495 06:56:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:54.495 06:56:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:54.495 06:56:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:54.495 06:56:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:54.495 06:56:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:54.495 06:56:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:54.495 06:56:46 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:54.495 06:56:46 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:54.495 06:56:46 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:54.495 06:56:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.495 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:32:54.495 06:56:46 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:54.496 06:56:46 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:54.496 06:56:46 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:54.496 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:32:55.881 INFO: APP EXITING 00:32:55.881 INFO: killing all VMs 00:32:55.881 INFO: killing vhost app 00:32:55.881 INFO: EXIT DONE 00:32:56.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:56.714 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:56.714 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:56.714 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:56.714 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:56.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:57.235 Cleaning 00:32:57.235 Removing: /var/run/dpdk/spdk0/config 00:32:57.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:57.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:57.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:57.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:57.505 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:57.505 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:57.505 Removing: /var/run/dpdk/spdk0 00:32:57.505 Removing: /var/run/dpdk/spdk_pid56981 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57183 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57401 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57494 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57534 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57651 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57669 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57862 00:32:57.505 Removing: /var/run/dpdk/spdk_pid57961 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58052 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58157 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58254 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58288 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58324 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58395 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58484 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58916 00:32:57.505 Removing: /var/run/dpdk/spdk_pid58980 00:32:57.505 
Removing: /var/run/dpdk/spdk_pid59032 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59048 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59145 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59155 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59257 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59273 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59326 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59344 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59397 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59415 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59564 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59606 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59690 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59862 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59946 00:32:57.505 Removing: /var/run/dpdk/spdk_pid59988 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60409 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60507 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60619 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60674 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60694 00:32:57.505 Removing: /var/run/dpdk/spdk_pid60778 00:32:57.505 Removing: /var/run/dpdk/spdk_pid61399 00:32:57.505 Removing: /var/run/dpdk/spdk_pid61435 00:32:57.505 Removing: /var/run/dpdk/spdk_pid61899 00:32:57.505 Removing: /var/run/dpdk/spdk_pid61997 00:32:57.505 Removing: /var/run/dpdk/spdk_pid62106 00:32:57.505 Removing: /var/run/dpdk/spdk_pid62159 00:32:57.505 Removing: /var/run/dpdk/spdk_pid62179 00:32:57.505 Removing: /var/run/dpdk/spdk_pid62210 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64059 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64196 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64200 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64212 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64257 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64261 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64273 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64318 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64322 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64334 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64380 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64384 00:32:57.506 Removing: /var/run/dpdk/spdk_pid64396 00:32:57.506 Removing: /var/run/dpdk/spdk_pid65768 00:32:57.506 Removing: /var/run/dpdk/spdk_pid65865 00:32:57.506 Removing: /var/run/dpdk/spdk_pid67271 00:32:57.506 Removing: /var/run/dpdk/spdk_pid68644 00:32:57.506 Removing: /var/run/dpdk/spdk_pid68737 00:32:57.506 Removing: /var/run/dpdk/spdk_pid68814 00:32:57.506 Removing: /var/run/dpdk/spdk_pid68890 00:32:57.506 Removing: /var/run/dpdk/spdk_pid68991 00:32:57.506 Removing: /var/run/dpdk/spdk_pid69064 00:32:57.506 Removing: /var/run/dpdk/spdk_pid69202 00:32:57.506 Removing: /var/run/dpdk/spdk_pid69566 00:32:57.506 Removing: /var/run/dpdk/spdk_pid69597 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70035 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70219 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70323 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70433 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70478 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70509 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70801 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70856 00:32:57.506 Removing: /var/run/dpdk/spdk_pid70928 00:32:57.506 Removing: /var/run/dpdk/spdk_pid71322 00:32:57.506 Removing: /var/run/dpdk/spdk_pid71468 00:32:57.506 Removing: /var/run/dpdk/spdk_pid72278 00:32:57.506 Removing: /var/run/dpdk/spdk_pid72418 00:32:57.506 Removing: /var/run/dpdk/spdk_pid72578 00:32:57.506 Removing: 
/var/run/dpdk/spdk_pid72681 00:32:57.506 Removing: /var/run/dpdk/spdk_pid73055 00:32:57.506 Removing: /var/run/dpdk/spdk_pid73337 00:32:57.506 Removing: /var/run/dpdk/spdk_pid73691 00:32:57.506 Removing: /var/run/dpdk/spdk_pid73876 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74056 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74103 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74296 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74327 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74380 00:32:57.506 Removing: /var/run/dpdk/spdk_pid74647 00:32:57.801 Removing: /var/run/dpdk/spdk_pid74878 00:32:57.801 Removing: /var/run/dpdk/spdk_pid75422 00:32:57.801 Removing: /var/run/dpdk/spdk_pid76193 00:32:57.801 Removing: /var/run/dpdk/spdk_pid76647 00:32:57.801 Removing: /var/run/dpdk/spdk_pid77442 00:32:57.801 Removing: /var/run/dpdk/spdk_pid77595 00:32:57.801 Removing: /var/run/dpdk/spdk_pid77671 00:32:57.801 Removing: /var/run/dpdk/spdk_pid78206 00:32:57.801 Removing: /var/run/dpdk/spdk_pid78260 00:32:57.801 Removing: /var/run/dpdk/spdk_pid78762 00:32:57.801 Removing: /var/run/dpdk/spdk_pid79254 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80111 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80240 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80280 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80343 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80400 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80454 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80625 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80706 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80773 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80862 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80891 00:32:57.801 Removing: /var/run/dpdk/spdk_pid80969 00:32:57.801 Removing: /var/run/dpdk/spdk_pid81130 00:32:57.801 Removing: /var/run/dpdk/spdk_pid81361 00:32:57.801 Removing: /var/run/dpdk/spdk_pid81951 00:32:57.801 Removing: /var/run/dpdk/spdk_pid82645 00:32:57.802 Removing: /var/run/dpdk/spdk_pid83225 00:32:57.802 Removing: /var/run/dpdk/spdk_pid83798 00:32:57.802 Clean 00:32:57.802 06:56:49 -- common/autotest_common.sh@1453 -- # return 0 00:32:57.802 06:56:49 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:57.802 06:56:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.802 06:56:49 -- common/autotest_common.sh@10 -- # set +x 00:32:57.802 06:56:49 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:32:57.802 06:56:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.802 06:56:49 -- common/autotest_common.sh@10 -- # set +x 00:32:57.802 06:56:49 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:57.802 06:56:49 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:57.802 06:56:49 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:57.802 06:56:49 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:57.802 06:56:49 -- spdk/autotest.sh@398 -- # hostname 00:32:57.802 06:56:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:58.085 geninfo: WARNING: invalid characters removed from testname! 
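The coverage post-processing that follows is lcov's usual capture / merge / filter sequence: capture the counters produced during the run, add them to the pre-test baseline, then strip DPDK and system sources so only SPDK code is reported. A condensed sketch using the same switches as the log (the long list of --rc options is abbreviated to the two that matter for branch and function coverage; further -r passes drop examples/vmd and the spdk_lspci and spdk_top apps in the same way):

    rootdir=/home/vagrant/spdk_repo/spdk
    out=/home/vagrant/spdk_repo/output
    rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

    # Capture counters gathered while the tests ran.
    lcov $rc -q -c --no-external -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"

    # Merge with the pre-test baseline, then drop third-party and system paths.
    lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $rc -q -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"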
00:33:24.656 06:57:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:26.044 06:57:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:28.593 06:57:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:31.896 06:57:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:34.445 06:57:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:36.994 06:57:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:38.906 06:57:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:38.906 06:57:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:38.906 06:57:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:33:38.906 06:57:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:38.906 06:57:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:38.906 06:57:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:38.906 + [[ -n 5026 ]] 00:33:38.906 + sudo kill 5026 00:33:38.917 [Pipeline] } 00:33:38.933 [Pipeline] // timeout 00:33:38.940 [Pipeline] } 00:33:38.955 [Pipeline] // stage 00:33:38.962 [Pipeline] } 00:33:38.977 [Pipeline] // catchError 00:33:38.987 [Pipeline] stage 00:33:38.989 [Pipeline] { (Stop VM) 00:33:39.004 [Pipeline] sh 00:33:39.287 + vagrant halt 00:33:41.834 ==> default: Halting domain... 
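timing_finish then renders the per-step timings from timing.txt as a flame graph with the flamegraph.pl call shown above; where the resulting SVG is written is not visible in the log, so the redirect below is an assumption:

    out=/home/vagrant/spdk_repo/output   # placeholder output directory
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        "$out/timing.txt" > "$out/timing.svg"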
00:33:48.436 [Pipeline] sh 00:33:48.720 + vagrant destroy -f 00:33:51.261 ==> default: Removing domain... 00:33:51.844 [Pipeline] sh 00:33:52.200 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:52.210 [Pipeline] } 00:33:52.226 [Pipeline] // stage 00:33:52.232 [Pipeline] } 00:33:52.247 [Pipeline] // dir 00:33:52.253 [Pipeline] } 00:33:52.269 [Pipeline] // wrap 00:33:52.276 [Pipeline] } 00:33:52.290 [Pipeline] // catchError 00:33:52.300 [Pipeline] stage 00:33:52.302 [Pipeline] { (Epilogue) 00:33:52.316 [Pipeline] sh 00:33:52.598 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:57.888 [Pipeline] catchError 00:33:57.890 [Pipeline] { 00:33:57.903 [Pipeline] sh 00:33:58.188 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:58.188 Artifacts sizes are good 00:33:58.199 [Pipeline] } 00:33:58.213 [Pipeline] // catchError 00:33:58.225 [Pipeline] archiveArtifacts 00:33:58.233 Archiving artifacts 00:33:58.339 [Pipeline] cleanWs 00:33:58.353 [WS-CLEANUP] Deleting project workspace... 00:33:58.353 [WS-CLEANUP] Deferred wipeout is used... 00:33:58.361 [WS-CLEANUP] done 00:33:58.363 [Pipeline] } 00:33:58.379 [Pipeline] // stage 00:33:58.384 [Pipeline] } 00:33:58.398 [Pipeline] // node 00:33:58.403 [Pipeline] End of Pipeline 00:33:58.442 Finished: SUCCESS
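For reference, the epilogue above reduces to tearing the VM down, pulling its results into the Jenkins workspace, and shrinking them before archiving; a simplified sketch of those shell steps (paths and script names are the ones visible in the log, while the archiveArtifacts and cleanWs steps remain Jenkins pipeline steps rather than shell commands):

    # Stop and delete the test VM, then move its output into the workspace.
    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvme-vg-autotest/output

    # Compress large artifacts and sanity-check their sizes before archiving.
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh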