00:00:00.001 Started by upstream project "autotest-nightly" build number 4281 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3644 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.165 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.166 The recommended git tool is: git 00:00:00.166 using credential 00000000-0000-0000-0000-000000000002 00:00:00.168 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.216 Fetching changes from the remote Git repository 00:00:00.218 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.258 Using shallow fetch with depth 1 00:00:00.258 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.258 > git --version # timeout=10 00:00:00.295 > git --version # 'git version 2.39.2' 00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.131 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.144 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.156 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.156 > git config core.sparsecheckout # timeout=10 00:00:07.169 > git read-tree -mu HEAD # timeout=10 00:00:07.186 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.204 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.204 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.330 [Pipeline] Start of Pipeline 00:00:07.346 [Pipeline] library 00:00:07.348 Loading library shm_lib@master 00:00:07.348 Library shm_lib@master is cached. Copying from home. 00:00:07.366 [Pipeline] node 00:00:07.379 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.381 [Pipeline] { 00:00:07.390 [Pipeline] catchError 00:00:07.391 [Pipeline] { 00:00:07.405 [Pipeline] wrap 00:00:07.414 [Pipeline] { 00:00:07.423 [Pipeline] stage 00:00:07.425 [Pipeline] { (Prologue) 00:00:07.443 [Pipeline] echo 00:00:07.445 Node: VM-host-SM38 00:00:07.452 [Pipeline] cleanWs 00:00:07.464 [WS-CLEANUP] Deleting project workspace... 00:00:07.464 [WS-CLEANUP] Deferred wipeout is used... 
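The checkout above pins the shared job-pool repo with a depth-1 fetch followed by a detached checkout of the fetched commit. A minimal hand-run equivalent is sketched below; it assumes anonymous read access to the Gerrit mirror, whereas the CI injects credentials through GIT_ASKPASS and routes through the proxy shown in the log.

  # Sketch: reproduce the shallow, pinned checkout (credentials/proxy omitted).
  git init jbp && cd jbp
  git fetch --tags --force --progress --depth=1 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # FETCH_HEAD resolves to the revision the build logged:
  git checkout -f db4637e8b949f278f369ec13f70585206ccd9507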
00:00:07.472 [WS-CLEANUP] done 00:00:07.669 [Pipeline] setCustomBuildProperty 00:00:07.737 [Pipeline] httpRequest 00:00:08.796 [Pipeline] echo 00:00:08.797 Sorcerer 10.211.164.20 is alive 00:00:08.806 [Pipeline] retry 00:00:08.808 [Pipeline] { 00:00:08.819 [Pipeline] httpRequest 00:00:08.822 HttpMethod: GET 00:00:08.823 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.824 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.836 Response Code: HTTP/1.1 200 OK 00:00:08.836 Success: Status code 200 is in the accepted range: 200,404 00:00:08.837 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.075 [Pipeline] } 00:00:11.091 [Pipeline] // retry 00:00:11.097 [Pipeline] sh 00:00:11.389 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.408 [Pipeline] httpRequest 00:00:11.776 [Pipeline] echo 00:00:11.778 Sorcerer 10.211.164.20 is alive 00:00:11.786 [Pipeline] retry 00:00:11.788 [Pipeline] { 00:00:11.801 [Pipeline] httpRequest 00:00:11.806 HttpMethod: GET 00:00:11.807 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:11.808 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:11.821 Response Code: HTTP/1.1 200 OK 00:00:11.822 Success: Status code 200 is in the accepted range: 200,404 00:00:11.823 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:15.831 [Pipeline] } 00:01:15.849 [Pipeline] // retry 00:01:15.856 [Pipeline] sh 00:01:16.142 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:19.460 [Pipeline] sh 00:01:19.750 + git -C spdk log --oneline -n5 00:01:19.750 d47eb51c9 bdev: fix a race between reset start and complete 00:01:19.750 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:19.750 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:19.750 4bcab9fb9 correct kick for CQ full case 00:01:19.750 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:19.773 [Pipeline] writeFile 00:01:19.790 [Pipeline] sh 00:01:20.081 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.094 [Pipeline] sh 00:01:20.380 + cat autorun-spdk.conf 00:01:20.380 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.380 SPDK_TEST_NVME=1 00:01:20.380 SPDK_TEST_FTL=1 00:01:20.380 SPDK_TEST_ISAL=1 00:01:20.380 SPDK_RUN_ASAN=1 00:01:20.380 SPDK_RUN_UBSAN=1 00:01:20.380 SPDK_TEST_XNVME=1 00:01:20.380 SPDK_TEST_NVME_FDP=1 00:01:20.380 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.389 RUN_NIGHTLY=1 00:01:20.391 [Pipeline] } 00:01:20.404 [Pipeline] // stage 00:01:20.420 [Pipeline] stage 00:01:20.422 [Pipeline] { (Run VM) 00:01:20.434 [Pipeline] sh 00:01:20.722 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.722 + echo 'Start stage prepare_nvme.sh' 00:01:20.722 Start stage prepare_nvme.sh 00:01:20.722 + [[ -n 10 ]] 00:01:20.722 + disk_prefix=ex10 00:01:20.722 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:20.722 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:20.722 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:20.722 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.722 ++ SPDK_TEST_NVME=1 00:01:20.722 ++ SPDK_TEST_FTL=1 00:01:20.722 ++ 
SPDK_TEST_ISAL=1 00:01:20.722 ++ SPDK_RUN_ASAN=1 00:01:20.722 ++ SPDK_RUN_UBSAN=1 00:01:20.722 ++ SPDK_TEST_XNVME=1 00:01:20.722 ++ SPDK_TEST_NVME_FDP=1 00:01:20.722 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.722 ++ RUN_NIGHTLY=1 00:01:20.722 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:20.722 + nvme_files=() 00:01:20.722 + declare -A nvme_files 00:01:20.722 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.722 + nvme_files['nvme.img']=5G 00:01:20.722 + nvme_files['nvme-cmb.img']=5G 00:01:20.722 + nvme_files['nvme-multi0.img']=4G 00:01:20.722 + nvme_files['nvme-multi1.img']=4G 00:01:20.722 + nvme_files['nvme-multi2.img']=4G 00:01:20.722 + nvme_files['nvme-openstack.img']=8G 00:01:20.722 + nvme_files['nvme-zns.img']=5G 00:01:20.722 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.722 + (( SPDK_TEST_FTL == 1 )) 00:01:20.722 + nvme_files["nvme-ftl.img"]=6G 00:01:20.722 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.722 + nvme_files["nvme-fdp.img"]=1G 00:01:20.722 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:20.722 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.722 + for nvme in "${!nvme_files[@]}" 00:01:20.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:20.983 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.983 + for nvme in "${!nvme_files[@]}" 00:01:20.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:20.983 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:20.983 + for nvme in "${!nvme_files[@]}" 00:01:20.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:20.983 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.983 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:20.983 + echo 'End stage prepare_nvme.sh' 00:01:20.983 End stage prepare_nvme.sh 00:01:20.997 [Pipeline] sh 00:01:21.285 + DISTRO=fedora39 00:01:21.285 + CPUS=10 00:01:21.285 + RAM=12288 00:01:21.285 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.285 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:21.285 00:01:21.285 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:21.285 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:21.285 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:21.285 HELP=0 00:01:21.285 DRY_RUN=0 00:01:21.285 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:21.285 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:21.285 NVME_AUTO_CREATE=0 00:01:21.285 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:21.285 NVME_CMB=,,,, 00:01:21.285 NVME_PMR=,,,, 00:01:21.285 NVME_ZNS=,,,, 00:01:21.285 NVME_MS=true,,,, 00:01:21.285 NVME_FDP=,,,on, 00:01:21.285 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.285 SPDK_VAGRANT_VMCPU=10 00:01:21.285 SPDK_VAGRANT_VMRAM=12288 00:01:21.285 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.285 SPDK_VAGRANT_HTTP_PROXY= 00:01:21.285 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.285 SPDK_OPENSTACK_NETWORK=0 00:01:21.285 VAGRANT_PACKAGE_BOX=0 00:01:21.285 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.285 FORCE_DISTRO=true 00:01:21.285 VAGRANT_BOX_VERSION= 00:01:21.285 EXTRA_VAGRANTFILES= 00:01:21.285 NIC_MODEL=e1000 00:01:21.285 00:01:21.285 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:21.285 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:23.849 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.111 ==> default: Creating image (snapshot of base box volume). 00:01:24.373 ==> default: Creating domain with the following settings... 
00:01:24.373 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731973530_831504ec6a29abe823f3 00:01:24.373 ==> default: -- Domain type: kvm 00:01:24.373 ==> default: -- Cpus: 10 00:01:24.373 ==> default: -- Feature: acpi 00:01:24.373 ==> default: -- Feature: apic 00:01:24.373 ==> default: -- Feature: pae 00:01:24.373 ==> default: -- Memory: 12288M 00:01:24.373 ==> default: -- Memory Backing: hugepages: 00:01:24.373 ==> default: -- Management MAC: 00:01:24.373 ==> default: -- Loader: 00:01:24.373 ==> default: -- Nvram: 00:01:24.373 ==> default: -- Base box: spdk/fedora39 00:01:24.373 ==> default: -- Storage pool: default 00:01:24.373 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731973530_831504ec6a29abe823f3.img (20G) 00:01:24.373 ==> default: -- Volume Cache: default 00:01:24.373 ==> default: -- Kernel: 00:01:24.373 ==> default: -- Initrd: 00:01:24.373 ==> default: -- Graphics Type: vnc 00:01:24.373 ==> default: -- Graphics Port: -1 00:01:24.373 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.373 ==> default: -- Graphics Password: Not defined 00:01:24.373 ==> default: -- Video Type: cirrus 00:01:24.373 ==> default: -- Video VRAM: 9216 00:01:24.373 ==> default: -- Sound Type: 00:01:24.373 ==> default: -- Keymap: en-us 00:01:24.373 ==> default: -- TPM Path: 00:01:24.373 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.373 ==> default: -- Command line args: 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:24.373 ==> default: -> value=-drive, 00:01:24.373 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:24.373 ==> default: -> value=-device, 00:01:24.373 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.636 ==> default: Creating shared folders metadata... 00:01:24.636 ==> default: Starting domain. 00:01:26.018 ==> default: Waiting for domain to get an IP address... 00:01:44.133 ==> default: Waiting for SSH to become available... 00:01:44.133 ==> default: Configuring and enabling network interfaces... 00:01:46.677 default: SSH address: 192.168.121.49:22 00:01:46.677 default: SSH username: vagrant 00:01:46.677 default: SSH auth method: private key 00:01:49.224 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.368 ==> default: Mounting SSHFS shared folder... 00:01:59.917 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.917 ==> default: Checking Mount.. 00:02:00.861 ==> default: Folder Successfully Mounted! 00:02:00.862 00:02:00.862 SUCCESS! 00:02:00.862 00:02:00.862 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.862 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.862 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.862 00:02:00.872 [Pipeline] } 00:02:00.891 [Pipeline] // stage 00:02:00.900 [Pipeline] dir 00:02:00.901 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:00.903 [Pipeline] { 00:02:00.916 [Pipeline] catchError 00:02:00.918 [Pipeline] { 00:02:00.932 [Pipeline] sh 00:02:01.219 + vagrant ssh-config --host vagrant 00:02:01.219 + sed -ne '/^Host/,$p' 00:02:01.219 + tee ssh_conf 00:02:03.775 Host vagrant 00:02:03.775 HostName 192.168.121.49 00:02:03.775 User vagrant 00:02:03.775 Port 22 00:02:03.775 UserKnownHostsFile /dev/null 00:02:03.775 StrictHostKeyChecking no 00:02:03.775 PasswordAuthentication no 00:02:03.775 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.775 IdentitiesOnly yes 00:02:03.775 LogLevel FATAL 00:02:03.775 ForwardAgent yes 00:02:03.775 ForwardX11 yes 00:02:03.775 00:02:03.792 [Pipeline] withEnv 00:02:03.795 [Pipeline] { 00:02:03.808 [Pipeline] sh 00:02:04.092 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:04.092 source /etc/os-release 00:02:04.092 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.092 # Minimal, systemd-like check. 
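# (Context for the branch below: /.dockerenv is a file Docker creates inside
# every container, so this path only runs on containerized agents; the
# /proc/self/mountinfo grep further down detects a bind-mounted /etc/hostname,
# i.e. a hostname supplied by the host the container runs on.)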
00:02:04.092 if [[ -e /.dockerenv ]]; then 00:02:04.092 # Clear garbage from the node'\''s name: 00:02:04.092 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.092 # $HOSTNAME is the actual container id 00:02:04.092 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.092 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.092 # We can assume this is a mount from a host where container is running, 00:02:04.093 # so fetch its hostname to easily identify the target swarm worker. 00:02:04.093 container="$(< /etc/hostname) ($agent)" 00:02:04.093 else 00:02:04.093 # Fallback 00:02:04.093 container=$agent 00:02:04.093 fi 00:02:04.093 fi 00:02:04.093 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.093 ' 00:02:04.369 [Pipeline] } 00:02:04.386 [Pipeline] // withEnv 00:02:04.395 [Pipeline] setCustomBuildProperty 00:02:04.409 [Pipeline] stage 00:02:04.411 [Pipeline] { (Tests) 00:02:04.427 [Pipeline] sh 00:02:04.713 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.992 [Pipeline] sh 00:02:05.279 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.559 [Pipeline] timeout 00:02:05.560 Timeout set to expire in 50 min 00:02:05.562 [Pipeline] { 00:02:05.577 [Pipeline] sh 00:02:05.865 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:06.438 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:02:06.453 [Pipeline] sh 00:02:06.738 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:07.017 [Pipeline] sh 00:02:07.380 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.661 [Pipeline] sh 00:02:07.947 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:08.208 ++ readlink -f spdk_repo 00:02:08.208 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.208 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.208 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.208 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.208 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.208 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.208 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.208 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:08.208 + cd /home/vagrant/spdk_repo 00:02:08.208 + source /etc/os-release 00:02:08.208 ++ NAME='Fedora Linux' 00:02:08.208 ++ VERSION='39 (Cloud Edition)' 00:02:08.208 ++ ID=fedora 00:02:08.208 ++ VERSION_ID=39 00:02:08.208 ++ VERSION_CODENAME= 00:02:08.208 ++ PLATFORM_ID=platform:f39 00:02:08.208 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.208 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.208 ++ LOGO=fedora-logo-icon 00:02:08.208 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.208 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.208 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.208 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.208 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.208 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.208 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.208 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.208 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.208 ++ SUPPORT_END=2024-11-12 00:02:08.208 ++ VARIANT='Cloud Edition' 00:02:08.208 ++ VARIANT_ID=cloud 00:02:08.208 + uname -a 00:02:08.208 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.208 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:08.731 Hugepages 00:02:08.731 node hugesize free / total 00:02:08.731 node0 1048576kB 0 / 0 00:02:08.731 node0 2048kB 0 / 0 00:02:08.731 00:02:08.731 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.731 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.731 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.993 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:08.993 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:02:08.993 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:08.993 + rm -f /tmp/spdk-ld-path 00:02:08.993 + source autorun-spdk.conf 00:02:08.993 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.993 ++ SPDK_TEST_NVME=1 00:02:08.993 ++ SPDK_TEST_FTL=1 00:02:08.993 ++ SPDK_TEST_ISAL=1 00:02:08.993 ++ SPDK_RUN_ASAN=1 00:02:08.993 ++ SPDK_RUN_UBSAN=1 00:02:08.993 ++ SPDK_TEST_XNVME=1 00:02:08.993 ++ SPDK_TEST_NVME_FDP=1 00:02:08.993 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.993 ++ RUN_NIGHTLY=1 00:02:08.993 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.993 + [[ -n '' ]] 00:02:08.993 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.993 + for M in /var/spdk/build-*-manifest.txt 00:02:08.993 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.993 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.993 + for M in /var/spdk/build-*-manifest.txt 00:02:08.993 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.993 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.993 + for M in /var/spdk/build-*-manifest.txt 00:02:08.993 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.993 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.993 ++ uname 00:02:08.993 + [[ Linux == \L\i\n\u\x ]] 00:02:08.993 + sudo dmesg -T 00:02:08.993 + sudo dmesg --clear 00:02:08.993 + dmesg_pid=5035 00:02:08.993 
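The status table above lists the four emulated controllers from the QEMU command line earlier (serials 12340-12343; note the guest enumerated nvme2/nvme3 in swapped order relative to QEMU's nvme-2/nvme-3 device ids). A quick way to confirm that mapping from inside the guest, assuming the standard nvme sysfs class attributes, is:

  # Print each controller's kernel name, QEMU-assigned serial, and PCI address
  # to match against the BDF column of the setup.sh status table above.
  for ctrl in /sys/class/nvme/nvme*; do
      printf '%s  serial=%s  pci=%s\n' \
          "${ctrl##*/}" "$(cat "$ctrl/serial")" "$(cat "$ctrl/address")"
  done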
+ [[ Fedora Linux == FreeBSD ]] 00:02:08.993 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.993 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.993 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.993 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.993 + sudo dmesg -Tw 00:02:08.993 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.993 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.993 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.993 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.993 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.993 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.993 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.993 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.993 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.993 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.993 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.255 23:46:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:09.255 23:46:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.255 23:46:15 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:09.255 23:46:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:09.255 23:46:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.255 23:46:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:09.255 23:46:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.255 23:46:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:09.255 23:46:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.255 23:46:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.255 23:46:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.255 23:46:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.255 23:46:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.255 23:46:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.255 23:46:15 -- paths/export.sh@5 -- $ export PATH 00:02:09.255 23:46:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.255 23:46:15 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.255 23:46:15 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:09.255 23:46:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731973575.XXXXXX 00:02:09.255 23:46:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731973575.oW4xLZ 00:02:09.255 23:46:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:09.255 23:46:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:09.255 23:46:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:09.255 23:46:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.255 23:46:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.255 23:46:15 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:09.255 23:46:15 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:09.255 23:46:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.255 23:46:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:09.255 23:46:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:09.255 23:46:15 -- pm/common@17 -- $ local monitor 00:02:09.255 23:46:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.255 23:46:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.255 23:46:15 -- pm/common@25 -- $ sleep 1 00:02:09.255 23:46:15 -- pm/common@21 -- $ date +%s 00:02:09.255 23:46:15 -- pm/common@21 -- $ date +%s 00:02:09.255 23:46:15 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731973575 00:02:09.255 23:46:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731973575 00:02:09.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731973575_collect-cpu-load.pm.log 00:02:09.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731973575_collect-vmstat.pm.log 00:02:10.199 23:46:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:10.199 23:46:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.199 23:46:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.199 23:46:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.199 23:46:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.199 Mon Nov 18 11:46:16 PM UTC 2024 00:02:10.199 23:46:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.199 v25.01-pre-190-gd47eb51c9 00:02:10.199 23:46:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:10.199 23:46:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:10.199 23:46:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:10.199 23:46:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:10.199 23:46:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.199 ************************************ 00:02:10.199 START TEST asan 00:02:10.199 ************************************ 00:02:10.199 using asan 00:02:10.199 23:46:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:10.199 00:02:10.199 real 0m0.000s 00:02:10.199 user 0m0.000s 00:02:10.199 sys 0m0.000s 00:02:10.199 23:46:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:10.199 ************************************ 00:02:10.199 END TEST asan 00:02:10.199 ************************************ 00:02:10.199 23:46:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.199 23:46:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.199 23:46:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.199 23:46:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:10.199 23:46:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:10.199 23:46:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.199 ************************************ 00:02:10.199 START TEST ubsan 00:02:10.199 ************************************ 00:02:10.199 using ubsan 00:02:10.199 23:46:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:10.199 00:02:10.199 real 0m0.000s 00:02:10.199 user 0m0.000s 00:02:10.199 sys 0m0.000s 00:02:10.199 23:46:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:10.199 ************************************ 00:02:10.199 END TEST ubsan 00:02:10.199 ************************************ 00:02:10.199 23:46:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.460 23:46:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.460 23:46:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.461 23:46:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.461 23:46:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.461 23:46:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.461 23:46:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.461 23:46:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
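The START TEST/END TEST banners and zeroed real/user/sys summaries above are produced by SPDK's run_test helper in common/autotest_common.sh. In spirit it wraps a command like the sketch below — a hypothetical simplification for illustration, not the actual implementation, which also manages xtrace state and per-test timing logs.

  # Hypothetical, stripped-down version of the banner/timing pattern.
  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      local t0=$SECONDS
      "$@"
      echo "END TEST $name ($((SECONDS - t0))s)"
      echo '************************************'
  }
  run_test_sketch ubsan echo 'using ubsan'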
00:02:10.461 23:46:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.461 23:46:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:10.461 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:10.461 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.032 Using 'verbs' RDMA provider 00:02:24.201 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:34.204 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:34.204 Creating mk/config.mk...done. 00:02:34.204 Creating mk/cc.flags.mk...done. 00:02:34.204 Type 'make' to build. 00:02:34.204 23:46:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:34.204 23:46:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:34.204 23:46:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:34.204 23:46:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.204 ************************************ 00:02:34.204 START TEST make 00:02:34.204 ************************************ 00:02:34.205 23:46:40 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:34.205 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:34.205 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:34.205 meson setup builddir \ 00:02:34.205 -Dwith-libaio=enabled \ 00:02:34.205 -Dwith-liburing=enabled \ 00:02:34.205 -Dwith-libvfn=disabled \ 00:02:34.205 -Dwith-spdk=disabled \ 00:02:34.205 -Dexamples=false \ 00:02:34.205 -Dtests=false \ 00:02:34.205 -Dtools=false && \ 00:02:34.205 meson compile -C builddir && \ 00:02:34.205 cd -) 00:02:34.205 make[1]: Nothing to be done for 'all'. 
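The make target above just shells out to Meson for the xnvme subproject; run from the xnvme subdirectory, the stand-alone equivalent of the logged configuration is:

  # Same options as the meson setup line echoed by make above.
  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir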
00:02:36.119 The Meson build system 00:02:36.119 Version: 1.5.0 00:02:36.119 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:36.119 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:36.119 Build type: native build 00:02:36.119 Project name: xnvme 00:02:36.119 Project version: 0.7.5 00:02:36.119 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.119 C linker for the host machine: cc ld.bfd 2.40-14 00:02:36.119 Host machine cpu family: x86_64 00:02:36.119 Host machine cpu: x86_64 00:02:36.119 Message: host_machine.system: linux 00:02:36.119 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:36.119 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:36.119 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:36.119 Run-time dependency threads found: YES 00:02:36.119 Has header "setupapi.h" : NO 00:02:36.119 Has header "linux/blkzoned.h" : YES 00:02:36.119 Has header "linux/blkzoned.h" : YES (cached) 00:02:36.119 Has header "libaio.h" : YES 00:02:36.119 Library aio found: YES 00:02:36.119 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.119 Run-time dependency liburing found: YES 2.2 00:02:36.119 Dependency libvfn skipped: feature with-libvfn disabled 00:02:36.119 Found CMake: /usr/bin/cmake (3.27.7) 00:02:36.119 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:36.119 Subproject spdk : skipped: feature with-spdk disabled 00:02:36.119 Run-time dependency appleframeworks found: NO (tried framework) 00:02:36.119 Run-time dependency appleframeworks found: NO (tried framework) 00:02:36.119 Library rt found: YES 00:02:36.119 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:36.119 Configuring xnvme_config.h using configuration 00:02:36.119 Configuring xnvme.spec using configuration 00:02:36.119 Run-time dependency bash-completion found: YES 2.11 00:02:36.119 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:36.119 Program cp found: YES (/usr/bin/cp) 00:02:36.119 Build targets in project: 3 00:02:36.119 00:02:36.119 xnvme 0.7.5 00:02:36.119 00:02:36.119 Subprojects 00:02:36.119 spdk : NO Feature 'with-spdk' disabled 00:02:36.119 00:02:36.119 User defined options 00:02:36.119 examples : false 00:02:36.119 tests : false 00:02:36.119 tools : false 00:02:36.119 with-libaio : enabled 00:02:36.119 with-liburing: enabled 00:02:36.119 with-libvfn : disabled 00:02:36.119 with-spdk : disabled 00:02:36.119 00:02:36.119 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.381 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:36.381 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:36.642 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:36.642 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:36.642 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:36.642 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:36.642 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:36.642 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:36.642 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:36.642 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:36.642 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:36.642 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:36.642 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:36.642 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:36.642 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:36.642 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:36.642 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:36.642 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:36.642 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:36.642 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:36.642 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:36.642 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:36.642 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:36.642 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:36.642 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:36.642 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:36.642 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:36.903 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:36.903 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:36.903 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:36.903 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:36.903 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:36.903 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:36.903 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:36.903 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:36.903 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:36.903 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:36.903 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:36.903 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:36.903 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:36.903 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:36.903 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:36.903 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:36.903 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:36.903 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:36.903 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:36.903 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:36.903 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:36.903 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:36.903 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:36.903 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:36.903 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:36.903 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:36.903 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:36.903 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:36.903 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:36.903 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:36.903 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:36.903 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:36.903 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:36.903 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:36.903 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:36.903 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:37.164 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:37.164 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:37.164 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:37.164 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:37.164 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:37.164 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:37.164 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:37.164 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:37.164 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:37.164 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:37.164 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:37.425 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:37.425 [75/76] Linking static target lib/libxnvme.a 00:02:37.686 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:37.686 INFO: autodetecting backend as ninja 00:02:37.686 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:37.686 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:44.259 The Meson build system 00:02:44.259 Version: 1.5.0 00:02:44.259 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:44.259 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:44.259 Build type: native build 00:02:44.259 Program cat found: YES (/usr/bin/cat) 00:02:44.259 Project name: DPDK 00:02:44.259 Project version: 24.03.0 00:02:44.259 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:44.259 C linker for the host machine: cc ld.bfd 2.40-14 00:02:44.259 Host machine cpu family: x86_64 00:02:44.259 Host machine cpu: x86_64 00:02:44.259 Message: ## Building in Developer Mode ## 00:02:44.259 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.259 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:44.259 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.259 Program python3 found: YES (/usr/bin/python3) 00:02:44.259 Program cat found: YES (/usr/bin/cat) 00:02:44.259 Compiler for C supports arguments -march=native: YES 00:02:44.259 Checking for size of "void *" : 8 00:02:44.259 Checking for size of "void *" : 8 (cached) 00:02:44.259 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:44.259 Library m found: YES 00:02:44.259 Library numa found: YES 00:02:44.259 Has header "numaif.h" : YES 00:02:44.259 Library fdt found: NO 00:02:44.260 Library execinfo found: NO 00:02:44.260 Has header "execinfo.h" : YES 00:02:44.260 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:44.260 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:44.260 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.260 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.260 Run-time dependency openssl found: YES 3.1.1 00:02:44.260 Run-time dependency libpcap found: YES 1.10.4 00:02:44.260 Has header "pcap.h" with dependency libpcap: YES 00:02:44.260 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.260 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.260 Compiler for C supports arguments -Wformat: YES 00:02:44.260 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.260 Compiler for C supports arguments -Wformat-security: NO 00:02:44.260 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.260 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.260 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.260 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.260 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.260 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.260 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.260 Compiler for C supports arguments -Wundef: YES 00:02:44.260 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.260 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.260 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:44.260 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.260 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.260 Program objdump found: YES (/usr/bin/objdump) 00:02:44.260 Compiler for C supports arguments -mavx512f: YES 00:02:44.260 Checking if "AVX512 checking" compiles: YES 00:02:44.260 Fetching value of define "__SSE4_2__" : 1 00:02:44.260 Fetching value of define "__AES__" : 1 00:02:44.260 Fetching value of define "__AVX__" : 1 00:02:44.260 Fetching value of define "__AVX2__" : 1 00:02:44.260 Fetching value of define "__AVX512BW__" : 1 00:02:44.260 Fetching value of define "__AVX512CD__" : 1 00:02:44.260 Fetching value of define "__AVX512DQ__" : 1 00:02:44.260 Fetching value of define "__AVX512F__" : 1 00:02:44.260 Fetching value of define "__AVX512VL__" : 1 00:02:44.260 Fetching value of define "__PCLMUL__" : 1 00:02:44.260 Fetching value of define "__RDRND__" : 1 00:02:44.260 Fetching value of define "__RDSEED__" : 1 00:02:44.260 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:44.260 Fetching value of define "__znver1__" : (undefined) 00:02:44.260 Fetching value of define "__znver2__" : (undefined) 00:02:44.260 Fetching value of define "__znver3__" : (undefined) 00:02:44.260 Fetching value of define "__znver4__" : (undefined) 00:02:44.260 Library asan found: YES 00:02:44.260 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.260 Message: lib/log: Defining dependency "log" 00:02:44.260 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.260 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.260 Library rt found: YES 00:02:44.260 Checking for function "getentropy" : NO 00:02:44.260 Message: 
lib/eal: Defining dependency "eal" 00:02:44.260 Message: lib/ring: Defining dependency "ring" 00:02:44.260 Message: lib/rcu: Defining dependency "rcu" 00:02:44.260 Message: lib/mempool: Defining dependency "mempool" 00:02:44.260 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.260 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.260 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.260 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.260 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.260 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.260 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:44.260 Compiler for C supports arguments -mpclmul: YES 00:02:44.260 Compiler for C supports arguments -maes: YES 00:02:44.260 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.260 Compiler for C supports arguments -mavx512bw: YES 00:02:44.260 Compiler for C supports arguments -mavx512dq: YES 00:02:44.260 Compiler for C supports arguments -mavx512vl: YES 00:02:44.260 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.260 Compiler for C supports arguments -mavx2: YES 00:02:44.260 Compiler for C supports arguments -mavx: YES 00:02:44.260 Message: lib/net: Defining dependency "net" 00:02:44.260 Message: lib/meter: Defining dependency "meter" 00:02:44.260 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.260 Message: lib/pci: Defining dependency "pci" 00:02:44.260 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.260 Message: lib/hash: Defining dependency "hash" 00:02:44.260 Message: lib/timer: Defining dependency "timer" 00:02:44.260 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.260 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.260 Message: lib/dmadev: Defining dependency "dmadev" 00:02:44.260 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:44.260 Message: lib/power: Defining dependency "power" 00:02:44.260 Message: lib/reorder: Defining dependency "reorder" 00:02:44.260 Message: lib/security: Defining dependency "security" 00:02:44.260 Has header "linux/userfaultfd.h" : YES 00:02:44.260 Has header "linux/vduse.h" : YES 00:02:44.260 Message: lib/vhost: Defining dependency "vhost" 00:02:44.260 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.260 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.260 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.260 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.260 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:44.260 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:44.260 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:44.260 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:44.260 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:44.260 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:44.260 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:44.260 Configuring doxy-api-html.conf using configuration 00:02:44.260 Configuring doxy-api-man.conf using configuration 00:02:44.260 Program mandb found: YES (/usr/bin/mandb) 00:02:44.260 Program sphinx-build found: NO 00:02:44.260 Configuring rte_build_config.h using configuration 00:02:44.260 Message: 00:02:44.260 ================= 00:02:44.260 Applications Enabled 00:02:44.260 
=================
00:02:44.260
00:02:44.260 apps:
00:02:44.260
00:02:44.260
00:02:44.260 Message:
00:02:44.260 =================
00:02:44.260 Libraries Enabled
00:02:44.260 =================
00:02:44.260
00:02:44.260 libs:
00:02:44.260 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:44.260 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:44.260 cryptodev, dmadev, power, reorder, security, vhost,
00:02:44.260
00:02:44.260 Message:
00:02:44.260 ===============
00:02:44.260 Drivers Enabled
00:02:44.260 ===============
00:02:44.260
00:02:44.260 common:
00:02:44.260
00:02:44.260 bus:
00:02:44.260 pci, vdev,
00:02:44.260 mempool:
00:02:44.260 ring,
00:02:44.260 dma:
00:02:44.260
00:02:44.260 net:
00:02:44.260
00:02:44.260 crypto:
00:02:44.260
00:02:44.260 compress:
00:02:44.260
00:02:44.260 vdpa:
00:02:44.260
00:02:44.260
00:02:44.260 Message:
00:02:44.260 =================
00:02:44.260 Content Skipped
00:02:44.260 =================
00:02:44.260
00:02:44.260 apps:
00:02:44.260 dumpcap: explicitly disabled via build config
00:02:44.260 graph: explicitly disabled via build config
00:02:44.260 pdump: explicitly disabled via build config
00:02:44.260 proc-info: explicitly disabled via build config
00:02:44.260 test-acl: explicitly disabled via build config
00:02:44.260 test-bbdev: explicitly disabled via build config
00:02:44.260 test-cmdline: explicitly disabled via build config
00:02:44.260 test-compress-perf: explicitly disabled via build config
00:02:44.260 test-crypto-perf: explicitly disabled via build config
00:02:44.260 test-dma-perf: explicitly disabled via build config
00:02:44.260 test-eventdev: explicitly disabled via build config
00:02:44.260 test-fib: explicitly disabled via build config
00:02:44.260 test-flow-perf: explicitly disabled via build config
00:02:44.260 test-gpudev: explicitly disabled via build config
00:02:44.260 test-mldev: explicitly disabled via build config
00:02:44.260 test-pipeline: explicitly disabled via build config
00:02:44.260 test-pmd: explicitly disabled via build config
00:02:44.260 test-regex: explicitly disabled via build config
00:02:44.260 test-sad: explicitly disabled via build config
00:02:44.260 test-security-perf: explicitly disabled via build config
00:02:44.260
00:02:44.260 libs:
00:02:44.260 argparse: explicitly disabled via build config
00:02:44.260 metrics: explicitly disabled via build config
00:02:44.260 acl: explicitly disabled via build config
00:02:44.260 bbdev: explicitly disabled via build config
00:02:44.260 bitratestats: explicitly disabled via build config
00:02:44.260 bpf: explicitly disabled via build config
00:02:44.260 cfgfile: explicitly disabled via build config
00:02:44.260 distributor: explicitly disabled via build config
00:02:44.260 efd: explicitly disabled via build config
00:02:44.260 eventdev: explicitly disabled via build config
00:02:44.260 dispatcher: explicitly disabled via build config
00:02:44.260 gpudev: explicitly disabled via build config
00:02:44.260 gro: explicitly disabled via build config
00:02:44.260 gso: explicitly disabled via build config
00:02:44.260 ip_frag: explicitly disabled via build config
00:02:44.260 jobstats: explicitly disabled via build config
00:02:44.260 latencystats: explicitly disabled via build config
00:02:44.260 lpm: explicitly disabled via build config
00:02:44.260 member: explicitly disabled via build config
00:02:44.260 pcapng: explicitly disabled via build config
00:02:44.260 rawdev: explicitly disabled via build config
00:02:44.261 regexdev: explicitly disabled via build config
00:02:44.261 mldev: explicitly disabled via build config
00:02:44.261 rib: explicitly disabled via build config
00:02:44.261 sched: explicitly disabled via build config
00:02:44.261 stack: explicitly disabled via build config
00:02:44.261 ipsec: explicitly disabled via build config
00:02:44.261 pdcp: explicitly disabled via build config
00:02:44.261 fib: explicitly disabled via build config
00:02:44.261 port: explicitly disabled via build config
00:02:44.261 pdump: explicitly disabled via build config
00:02:44.261 table: explicitly disabled via build config
00:02:44.261 pipeline: explicitly disabled via build config
00:02:44.261 graph: explicitly disabled via build config
00:02:44.261 node: explicitly disabled via build config
00:02:44.261
00:02:44.261 drivers:
00:02:44.261 common/cpt: not in enabled drivers build config
00:02:44.261 common/dpaax: not in enabled drivers build config
00:02:44.261 common/iavf: not in enabled drivers build config
00:02:44.261 common/idpf: not in enabled drivers build config
00:02:44.261 common/ionic: not in enabled drivers build config
00:02:44.261 common/mvep: not in enabled drivers build config
00:02:44.261 common/octeontx: not in enabled drivers build config
00:02:44.261 bus/auxiliary: not in enabled drivers build config
00:02:44.261 bus/cdx: not in enabled drivers build config
00:02:44.261 bus/dpaa: not in enabled drivers build config
00:02:44.261 bus/fslmc: not in enabled drivers build config
00:02:44.261 bus/ifpga: not in enabled drivers build config
00:02:44.261 bus/platform: not in enabled drivers build config
00:02:44.261 bus/uacce: not in enabled drivers build config
00:02:44.261 bus/vmbus: not in enabled drivers build config
00:02:44.261 common/cnxk: not in enabled drivers build config
00:02:44.261 common/mlx5: not in enabled drivers build config
00:02:44.261 common/nfp: not in enabled drivers build config
00:02:44.261 common/nitrox: not in enabled drivers build config
00:02:44.261 common/qat: not in enabled drivers build config
00:02:44.261 common/sfc_efx: not in enabled drivers build config
00:02:44.261 mempool/bucket: not in enabled drivers build config
00:02:44.261 mempool/cnxk: not in enabled drivers build config
00:02:44.261 mempool/dpaa: not in enabled drivers build config
00:02:44.261 mempool/dpaa2: not in enabled drivers build config
00:02:44.261 mempool/octeontx: not in enabled drivers build config
00:02:44.261 mempool/stack: not in enabled drivers build config
00:02:44.261 dma/cnxk: not in enabled drivers build config
00:02:44.261 dma/dpaa: not in enabled drivers build config
00:02:44.261 dma/dpaa2: not in enabled drivers build config
00:02:44.261 dma/hisilicon: not in enabled drivers build config
00:02:44.261 dma/idxd: not in enabled drivers build config
00:02:44.261 dma/ioat: not in enabled drivers build config
00:02:44.261 dma/skeleton: not in enabled drivers build config
00:02:44.261 net/af_packet: not in enabled drivers build config
00:02:44.261 net/af_xdp: not in enabled drivers build config
00:02:44.261 net/ark: not in enabled drivers build config
00:02:44.261 net/atlantic: not in enabled drivers build config
00:02:44.261 net/avp: not in enabled drivers build config
00:02:44.261 net/axgbe: not in enabled drivers build config
00:02:44.261 net/bnx2x: not in enabled drivers build config
00:02:44.261 net/bnxt: not in enabled drivers build config
00:02:44.261 net/bonding: not in enabled drivers build config
00:02:44.261 net/cnxk: not in enabled drivers build config
00:02:44.261 net/cpfl: not in enabled drivers build config
00:02:44.261 net/cxgbe: not in enabled drivers build config
00:02:44.261 net/dpaa: not in enabled drivers build config
00:02:44.261 net/dpaa2: not in enabled drivers build config
00:02:44.261 net/e1000: not in enabled drivers build config
00:02:44.261 net/ena: not in enabled drivers build config
00:02:44.261 net/enetc: not in enabled drivers build config
00:02:44.261 net/enetfec: not in enabled drivers build config
00:02:44.261 net/enic: not in enabled drivers build config
00:02:44.261 net/failsafe: not in enabled drivers build config
00:02:44.261 net/fm10k: not in enabled drivers build config
00:02:44.261 net/gve: not in enabled drivers build config
00:02:44.261 net/hinic: not in enabled drivers build config
00:02:44.261 net/hns3: not in enabled drivers build config
00:02:44.261 net/i40e: not in enabled drivers build config
00:02:44.261 net/iavf: not in enabled drivers build config
00:02:44.261 net/ice: not in enabled drivers build config
00:02:44.261 net/idpf: not in enabled drivers build config
00:02:44.261 net/igc: not in enabled drivers build config
00:02:44.261 net/ionic: not in enabled drivers build config
00:02:44.261 net/ipn3ke: not in enabled drivers build config
00:02:44.261 net/ixgbe: not in enabled drivers build config
00:02:44.261 net/mana: not in enabled drivers build config
00:02:44.261 net/memif: not in enabled drivers build config
00:02:44.261 net/mlx4: not in enabled drivers build config
00:02:44.261 net/mlx5: not in enabled drivers build config
00:02:44.261 net/mvneta: not in enabled drivers build config
00:02:44.261 net/mvpp2: not in enabled drivers build config
00:02:44.261 net/netvsc: not in enabled drivers build config
00:02:44.261 net/nfb: not in enabled drivers build config
00:02:44.261 net/nfp: not in enabled drivers build config
00:02:44.261 net/ngbe: not in enabled drivers build config
00:02:44.261 net/null: not in enabled drivers build config
00:02:44.261 net/octeontx: not in enabled drivers build config
00:02:44.261 net/octeon_ep: not in enabled drivers build config
00:02:44.261 net/pcap: not in enabled drivers build config
00:02:44.261 net/pfe: not in enabled drivers build config
00:02:44.261 net/qede: not in enabled drivers build config
00:02:44.261 net/ring: not in enabled drivers build config
00:02:44.261 net/sfc: not in enabled drivers build config
00:02:44.261 net/softnic: not in enabled drivers build config
00:02:44.261 net/tap: not in enabled drivers build config
00:02:44.261 net/thunderx: not in enabled drivers build config
00:02:44.261 net/txgbe: not in enabled drivers build config
00:02:44.261 net/vdev_netvsc: not in enabled drivers build config
00:02:44.261 net/vhost: not in enabled drivers build config
00:02:44.261 net/virtio: not in enabled drivers build config
00:02:44.261 net/vmxnet3: not in enabled drivers build config
00:02:44.261 raw/*: missing internal dependency, "rawdev"
00:02:44.261 crypto/armv8: not in enabled drivers build config
00:02:44.261 crypto/bcmfs: not in enabled drivers build config
00:02:44.261 crypto/caam_jr: not in enabled drivers build config
00:02:44.261 crypto/ccp: not in enabled drivers build config
00:02:44.261 crypto/cnxk: not in enabled drivers build config
00:02:44.261 crypto/dpaa_sec: not in enabled drivers build config
00:02:44.261 crypto/dpaa2_sec: not in enabled drivers build config
00:02:44.261 crypto/ipsec_mb: not in enabled drivers build config
00:02:44.261 crypto/mlx5: not in enabled drivers build config
00:02:44.261 crypto/mvsam: not in enabled drivers build config
00:02:44.261 crypto/nitrox: not in enabled drivers build config
00:02:44.261 crypto/null: not in enabled drivers build config
00:02:44.261 crypto/octeontx: not in enabled drivers build config
00:02:44.261 crypto/openssl: not in enabled drivers build config
00:02:44.261 crypto/scheduler: not in enabled drivers build config
00:02:44.261 crypto/uadk: not in enabled drivers build config
00:02:44.261 crypto/virtio: not in enabled drivers build config
00:02:44.261 compress/isal: not in enabled drivers build config
00:02:44.261 compress/mlx5: not in enabled drivers build config
00:02:44.261 compress/nitrox: not in enabled drivers build config
00:02:44.261 compress/octeontx: not in enabled drivers build config
00:02:44.261 compress/zlib: not in enabled drivers build config
00:02:44.261 regex/*: missing internal dependency, "regexdev"
00:02:44.261 ml/*: missing internal dependency, "mldev"
00:02:44.261 vdpa/ifc: not in enabled drivers build config
00:02:44.261 vdpa/mlx5: not in enabled drivers build config
00:02:44.261 vdpa/nfp: not in enabled drivers build config
00:02:44.261 vdpa/sfc: not in enabled drivers build config
00:02:44.261 event/*: missing internal dependency, "eventdev"
00:02:44.261 baseband/*: missing internal dependency, "bbdev"
00:02:44.261 gpu/*: missing internal dependency, "gpudev"
00:02:44.261
00:02:44.261
00:02:44.261 Build targets in project: 84
00:02:44.261
00:02:44.261 DPDK 24.03.0
00:02:44.261
00:02:44.261 User defined options
00:02:44.261 buildtype : debug
00:02:44.261 default_library : shared
00:02:44.261 libdir : lib
00:02:44.261 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:44.261 b_sanitize : address
00:02:44.261 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:44.261 c_link_args :
00:02:44.261 cpu_instruction_set: native
00:02:44.261 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:44.261 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:44.261 enable_docs : false
00:02:44.261 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:44.261 enable_kmods : false
00:02:44.261 max_lcores : 128
00:02:44.261 tests : false
00:02:44.261
00:02:44.261 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:44.261 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:44.261 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:44.261 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:44.520 [3/267] Linking static target lib/librte_kvargs.a
00:02:44.520 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:44.520 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:44.520 [6/267] Linking static target lib/librte_log.a
00:02:44.520 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:44.778 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:44.778 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:44.778 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:44.778 [11/267]
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.778 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.778 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.778 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.778 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.778 [16/267] Linking static target lib/librte_telemetry.a 00:02:44.778 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.778 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.043 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.043 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.043 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.043 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.314 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.314 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.314 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.314 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.314 [27/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.314 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.314 [29/267] Linking target lib/librte_log.so.24.1 00:02:45.314 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.572 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.572 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.572 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.572 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.572 [35/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.572 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.572 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.572 [38/267] Linking target lib/librte_kvargs.so.24.1 00:02:45.572 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:45.572 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.572 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.830 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.830 [43/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.830 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.830 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.831 [46/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.831 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.089 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.089 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
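(Editor's note: the "User defined options" summary above maps onto a meson invocation roughly like the sketch below. This is reconstructed from the logged values, not copied from the job's actual command line, which the log does not echo; the source/build directory layout is inferred from the logged prefix and ninja directory, and the two long disable lists are abbreviated with "..." here, with the full values being exactly those printed in the summary.)

  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,... \
      -Ddisable_libs=acl,argparse,bbdev,... \
      -Denable_docs=false -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dmax_lcores=128 -Dtests=false
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10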
00:02:46.089 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.089 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.089 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.089 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.089 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.089 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.347 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.347 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.347 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.347 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.347 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.347 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.347 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.606 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.606 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.606 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.606 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.606 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.606 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:46.864 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.864 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.864 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.864 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.864 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.864 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.864 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.864 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.123 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.123 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.123 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.123 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.123 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.123 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.381 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.381 [84/267] Linking static target lib/librte_ring.a 00:02:47.381 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.381 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.381 [87/267] Linking static target lib/librte_eal.a 00:02:47.641 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.641 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.641 [90/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.641 [91/267] Linking 
static target lib/librte_rcu.a 00:02:47.641 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.641 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.641 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.641 [95/267] Linking static target lib/librte_mempool.a 00:02:47.641 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.641 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.900 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.900 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.900 [100/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.900 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.159 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.159 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.159 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.159 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.159 [106/267] Linking static target lib/librte_mbuf.a 00:02:48.159 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:48.159 [108/267] Linking static target lib/librte_net.a 00:02:48.418 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.418 [110/267] Linking static target lib/librte_meter.a 00:02:48.418 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.418 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.418 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.418 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.418 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.677 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.677 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.677 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.935 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.935 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:48.935 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.935 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.194 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.194 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.194 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.194 [126/267] Linking static target lib/librte_pci.a 00:02:49.194 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.194 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.453 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.453 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.453 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.453 [132/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:49.453 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.453 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.453 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.453 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.453 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.453 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.711 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.711 [140/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:49.711 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.711 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.711 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.711 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.711 [145/267] Linking static target lib/librte_cmdline.a 00:02:49.711 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.711 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.970 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.970 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.970 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.970 [151/267] Linking static target lib/librte_timer.a 00:02:49.970 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:50.228 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.228 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.228 [155/267] Linking static target lib/librte_ethdev.a 00:02:50.228 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.228 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.486 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.487 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.487 [160/267] Linking static target lib/librte_compressdev.a 00:02:50.487 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.487 [162/267] Linking static target lib/librte_hash.a 00:02:50.487 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.487 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:50.487 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.487 [166/267] Linking static target lib/librte_dmadev.a 00:02:50.487 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.746 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.005 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.005 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.005 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.005 [172/267] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.005 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.263 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.263 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.263 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.263 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.263 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.263 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.263 [180/267] Linking static target lib/librte_cryptodev.a 00:02:51.263 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.263 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.521 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.521 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.521 [185/267] Linking static target lib/librte_power.a 00:02:51.779 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.779 [187/267] Linking static target lib/librte_reorder.a 00:02:51.779 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.779 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:52.037 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:52.037 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:52.037 [192/267] Linking static target lib/librte_security.a 00:02:52.037 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.294 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.552 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.552 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.552 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.552 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.552 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.552 [200/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.808 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.808 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:53.067 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.067 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.067 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.067 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.067 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.067 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.067 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.324 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.324 [211/267] 
Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.324 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.324 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.324 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.324 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.324 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:53.324 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.324 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:53.324 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.324 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.582 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.582 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.582 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.582 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:53.582 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.840 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.404 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.336 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.336 [229/267] Linking target lib/librte_eal.so.24.1 00:02:55.336 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.336 [231/267] Linking target lib/librte_ring.so.24.1 00:02:55.336 [232/267] Linking target lib/librte_timer.so.24.1 00:02:55.336 [233/267] Linking target lib/librte_pci.so.24.1 00:02:55.336 [234/267] Linking target lib/librte_meter.so.24.1 00:02:55.336 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.336 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:55.594 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.594 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.594 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.594 [240/267] Linking target lib/librte_mempool.so.24.1 00:02:55.594 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:55.594 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.594 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.594 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.594 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.594 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.594 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.594 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:55.875 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.875 [250/267] Linking target lib/librte_net.so.24.1 00:02:55.875 [251/267] Linking target 
lib/librte_reorder.so.24.1 00:02:55.875 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:55.875 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:55.875 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.875 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.875 [256/267] Linking target lib/librte_hash.so.24.1 00:02:55.875 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:55.875 [258/267] Linking target lib/librte_security.so.24.1 00:02:55.875 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.133 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.133 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:56.133 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.133 [263/267] Linking target lib/librte_power.so.24.1 00:02:58.031 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.031 [265/267] Linking static target lib/librte_vhost.a 00:02:58.964 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.964 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:58.964 INFO: autodetecting backend as ninja 00:02:58.964 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.158 CC lib/ut_mock/mock.o 00:03:11.158 CC lib/ut/ut.o 00:03:11.158 CC lib/log/log.o 00:03:11.158 CC lib/log/log_flags.o 00:03:11.158 CC lib/log/log_deprecated.o 00:03:11.158 LIB libspdk_ut_mock.a 00:03:11.158 LIB libspdk_ut.a 00:03:11.158 LIB libspdk_log.a 00:03:11.158 SO libspdk_ut_mock.so.6.0 00:03:11.158 SO libspdk_ut.so.2.0 00:03:11.158 SO libspdk_log.so.7.1 00:03:11.158 SYMLINK libspdk_ut_mock.so 00:03:11.158 SYMLINK libspdk_ut.so 00:03:11.158 SYMLINK libspdk_log.so 00:03:11.158 CXX lib/trace_parser/trace.o 00:03:11.158 CC lib/util/base64.o 00:03:11.158 CC lib/util/cpuset.o 00:03:11.158 CC lib/util/bit_array.o 00:03:11.158 CC lib/util/crc16.o 00:03:11.158 CC lib/dma/dma.o 00:03:11.158 CC lib/util/crc32.o 00:03:11.158 CC lib/util/crc32c.o 00:03:11.158 CC lib/ioat/ioat.o 00:03:11.158 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.158 CC lib/util/crc32_ieee.o 00:03:11.158 CC lib/util/crc64.o 00:03:11.158 CC lib/util/dif.o 00:03:11.158 CC lib/util/fd.o 00:03:11.158 LIB libspdk_dma.a 00:03:11.158 CC lib/util/fd_group.o 00:03:11.158 SO libspdk_dma.so.5.0 00:03:11.158 CC lib/util/file.o 00:03:11.158 CC lib/vfio_user/host/vfio_user.o 00:03:11.416 CC lib/util/hexlify.o 00:03:11.416 SYMLINK libspdk_dma.so 00:03:11.416 CC lib/util/iov.o 00:03:11.416 LIB libspdk_ioat.a 00:03:11.416 CC lib/util/math.o 00:03:11.416 SO libspdk_ioat.so.7.0 00:03:11.416 CC lib/util/net.o 00:03:11.416 CC lib/util/pipe.o 00:03:11.416 SYMLINK libspdk_ioat.so 00:03:11.416 CC lib/util/strerror_tls.o 00:03:11.416 CC lib/util/string.o 00:03:11.416 CC lib/util/uuid.o 00:03:11.416 LIB libspdk_vfio_user.a 00:03:11.416 CC lib/util/xor.o 00:03:11.416 CC lib/util/zipf.o 00:03:11.416 SO libspdk_vfio_user.so.5.0 00:03:11.416 CC lib/util/md5.o 00:03:11.416 SYMLINK libspdk_vfio_user.so 00:03:11.674 LIB libspdk_util.a 00:03:11.932 SO libspdk_util.so.10.1 00:03:11.932 LIB libspdk_trace_parser.a 00:03:11.932 SO libspdk_trace_parser.so.6.0 00:03:11.932 SYMLINK libspdk_util.so 00:03:11.932 SYMLINK libspdk_trace_parser.so 00:03:12.189 CC lib/vmd/vmd.o 
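(Editor's note: at this point the output switches from DPDK's ninja run to SPDK's own quiet make output: each "CC" line compiles one object, "LIB" archives a static library, "SO" links the shared variant, and "SYMLINK" places it in the build output tree. A rough way to reproduce this stage by hand is sketched below; the configure flags are assumptions inferred from the address-sanitizer setting above and the xnvme bdev module built later in this log, not taken from the job's actual scripts.)

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-asan --with-xnvme
  make -j10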
00:03:12.189 CC lib/json/json_parse.o 00:03:12.190 CC lib/vmd/led.o 00:03:12.190 CC lib/json/json_write.o 00:03:12.190 CC lib/env_dpdk/env.o 00:03:12.190 CC lib/idxd/idxd.o 00:03:12.190 CC lib/json/json_util.o 00:03:12.190 CC lib/env_dpdk/memory.o 00:03:12.190 CC lib/conf/conf.o 00:03:12.190 CC lib/rdma_utils/rdma_utils.o 00:03:12.190 CC lib/env_dpdk/pci.o 00:03:12.190 LIB libspdk_conf.a 00:03:12.190 SO libspdk_conf.so.6.0 00:03:12.190 CC lib/env_dpdk/init.o 00:03:12.190 SYMLINK libspdk_conf.so 00:03:12.190 CC lib/env_dpdk/threads.o 00:03:12.190 CC lib/idxd/idxd_user.o 00:03:12.448 LIB libspdk_rdma_utils.a 00:03:12.448 LIB libspdk_json.a 00:03:12.448 SO libspdk_rdma_utils.so.1.0 00:03:12.448 SO libspdk_json.so.6.0 00:03:12.448 SYMLINK libspdk_rdma_utils.so 00:03:12.448 SYMLINK libspdk_json.so 00:03:12.448 CC lib/env_dpdk/pci_ioat.o 00:03:12.448 CC lib/rdma_provider/common.o 00:03:12.448 CC lib/idxd/idxd_kernel.o 00:03:12.448 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.448 CC lib/env_dpdk/pci_virtio.o 00:03:12.448 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.717 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.717 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.717 CC lib/env_dpdk/pci_vmd.o 00:03:12.717 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:12.717 LIB libspdk_idxd.a 00:03:12.717 CC lib/env_dpdk/pci_idxd.o 00:03:12.717 SO libspdk_idxd.so.12.1 00:03:12.717 LIB libspdk_vmd.a 00:03:12.717 CC lib/env_dpdk/pci_event.o 00:03:12.717 SO libspdk_vmd.so.6.0 00:03:12.717 CC lib/env_dpdk/sigbus_handler.o 00:03:12.717 CC lib/env_dpdk/pci_dpdk.o 00:03:12.717 SYMLINK libspdk_idxd.so 00:03:12.717 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.717 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.717 SYMLINK libspdk_vmd.so 00:03:12.717 LIB libspdk_jsonrpc.a 00:03:12.975 LIB libspdk_rdma_provider.a 00:03:12.975 SO libspdk_jsonrpc.so.6.0 00:03:12.975 SO libspdk_rdma_provider.so.7.0 00:03:12.975 SYMLINK libspdk_jsonrpc.so 00:03:12.975 SYMLINK libspdk_rdma_provider.so 00:03:13.234 CC lib/rpc/rpc.o 00:03:13.234 LIB libspdk_env_dpdk.a 00:03:13.234 SO libspdk_env_dpdk.so.15.1 00:03:13.234 LIB libspdk_rpc.a 00:03:13.492 SO libspdk_rpc.so.6.0 00:03:13.492 SYMLINK libspdk_rpc.so 00:03:13.492 SYMLINK libspdk_env_dpdk.so 00:03:13.492 CC lib/notify/notify.o 00:03:13.492 CC lib/notify/notify_rpc.o 00:03:13.492 CC lib/trace/trace.o 00:03:13.492 CC lib/trace/trace_flags.o 00:03:13.492 CC lib/trace/trace_rpc.o 00:03:13.492 CC lib/keyring/keyring_rpc.o 00:03:13.492 CC lib/keyring/keyring.o 00:03:13.750 LIB libspdk_notify.a 00:03:13.750 SO libspdk_notify.so.6.0 00:03:13.750 SYMLINK libspdk_notify.so 00:03:13.750 LIB libspdk_keyring.a 00:03:13.750 LIB libspdk_trace.a 00:03:13.751 SO libspdk_keyring.so.2.0 00:03:13.751 SO libspdk_trace.so.11.0 00:03:14.009 SYMLINK libspdk_keyring.so 00:03:14.009 SYMLINK libspdk_trace.so 00:03:14.009 CC lib/thread/thread.o 00:03:14.009 CC lib/thread/iobuf.o 00:03:14.009 CC lib/sock/sock.o 00:03:14.009 CC lib/sock/sock_rpc.o 00:03:14.576 LIB libspdk_sock.a 00:03:14.576 SO libspdk_sock.so.10.0 00:03:14.576 SYMLINK libspdk_sock.so 00:03:14.834 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:14.834 CC lib/nvme/nvme_fabric.o 00:03:14.834 CC lib/nvme/nvme_ns.o 00:03:14.834 CC lib/nvme/nvme_ctrlr.o 00:03:14.834 CC lib/nvme/nvme_ns_cmd.o 00:03:14.834 CC lib/nvme/nvme_pcie.o 00:03:14.834 CC lib/nvme/nvme_pcie_common.o 00:03:14.834 CC lib/nvme/nvme_qpair.o 00:03:14.834 CC lib/nvme/nvme.o 00:03:15.400 CC lib/nvme/nvme_quirks.o 00:03:15.400 CC lib/nvme/nvme_transport.o 00:03:15.400 CC lib/nvme/nvme_discovery.o 00:03:15.400 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.400 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.659 LIB libspdk_thread.a 00:03:15.659 CC lib/nvme/nvme_tcp.o 00:03:15.659 SO libspdk_thread.so.11.0 00:03:15.659 CC lib/nvme/nvme_opal.o 00:03:15.659 SYMLINK libspdk_thread.so 00:03:15.659 CC lib/nvme/nvme_io_msg.o 00:03:15.917 CC lib/nvme/nvme_poll_group.o 00:03:15.917 CC lib/nvme/nvme_zns.o 00:03:15.917 CC lib/nvme/nvme_stubs.o 00:03:15.917 CC lib/nvme/nvme_auth.o 00:03:15.917 CC lib/nvme/nvme_cuse.o 00:03:16.174 CC lib/nvme/nvme_rdma.o 00:03:16.174 CC lib/accel/accel.o 00:03:16.432 CC lib/blob/blobstore.o 00:03:16.432 CC lib/accel/accel_rpc.o 00:03:16.432 CC lib/init/json_config.o 00:03:16.691 CC lib/virtio/virtio.o 00:03:16.691 CC lib/accel/accel_sw.o 00:03:16.691 CC lib/init/subsystem.o 00:03:16.691 CC lib/blob/request.o 00:03:16.949 CC lib/virtio/virtio_vhost_user.o 00:03:16.949 CC lib/virtio/virtio_vfio_user.o 00:03:16.949 CC lib/fsdev/fsdev.o 00:03:16.949 CC lib/init/subsystem_rpc.o 00:03:16.949 CC lib/fsdev/fsdev_io.o 00:03:16.949 CC lib/fsdev/fsdev_rpc.o 00:03:16.949 CC lib/init/rpc.o 00:03:17.208 CC lib/virtio/virtio_pci.o 00:03:17.208 CC lib/blob/zeroes.o 00:03:17.208 CC lib/blob/blob_bs_dev.o 00:03:17.208 LIB libspdk_init.a 00:03:17.208 SO libspdk_init.so.6.0 00:03:17.208 SYMLINK libspdk_init.so 00:03:17.466 LIB libspdk_virtio.a 00:03:17.467 LIB libspdk_accel.a 00:03:17.467 SO libspdk_virtio.so.7.0 00:03:17.467 SO libspdk_accel.so.16.0 00:03:17.467 LIB libspdk_nvme.a 00:03:17.467 CC lib/event/app.o 00:03:17.467 CC lib/event/app_rpc.o 00:03:17.467 CC lib/event/scheduler_static.o 00:03:17.467 CC lib/event/reactor.o 00:03:17.467 CC lib/event/log_rpc.o 00:03:17.467 SYMLINK libspdk_virtio.so 00:03:17.467 SYMLINK libspdk_accel.so 00:03:17.467 LIB libspdk_fsdev.a 00:03:17.467 SO libspdk_fsdev.so.2.0 00:03:17.467 SO libspdk_nvme.so.15.0 00:03:17.724 CC lib/bdev/bdev.o 00:03:17.724 CC lib/bdev/bdev_rpc.o 00:03:17.724 CC lib/bdev/part.o 00:03:17.724 CC lib/bdev/bdev_zone.o 00:03:17.724 SYMLINK libspdk_fsdev.so 00:03:17.724 CC lib/bdev/scsi_nvme.o 00:03:17.724 SYMLINK libspdk_nvme.so 00:03:17.724 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:17.983 LIB libspdk_event.a 00:03:17.983 SO libspdk_event.so.14.0 00:03:17.983 SYMLINK libspdk_event.so 00:03:18.549 LIB libspdk_fuse_dispatcher.a 00:03:18.549 SO libspdk_fuse_dispatcher.so.1.0 00:03:18.549 SYMLINK libspdk_fuse_dispatcher.so 00:03:19.117 LIB libspdk_blob.a 00:03:19.117 SO libspdk_blob.so.11.0 00:03:19.376 SYMLINK libspdk_blob.so 00:03:19.635 CC lib/blobfs/blobfs.o 00:03:19.635 CC lib/blobfs/tree.o 00:03:19.635 CC lib/lvol/lvol.o 00:03:19.893 LIB libspdk_bdev.a 00:03:19.893 SO libspdk_bdev.so.17.0 00:03:19.893 SYMLINK libspdk_bdev.so 00:03:20.151 CC lib/nbd/nbd.o 00:03:20.151 CC lib/nbd/nbd_rpc.o 00:03:20.151 CC lib/scsi/dev.o 00:03:20.151 CC lib/scsi/lun.o 00:03:20.151 CC lib/scsi/port.o 00:03:20.151 CC lib/ublk/ublk.o 00:03:20.151 CC lib/nvmf/ctrlr.o 00:03:20.151 CC lib/ftl/ftl_core.o 00:03:20.151 CC lib/ftl/ftl_init.o 00:03:20.151 CC lib/ftl/ftl_layout.o 00:03:20.410 CC lib/ftl/ftl_debug.o 00:03:20.410 CC lib/ftl/ftl_io.o 00:03:20.410 LIB libspdk_blobfs.a 00:03:20.410 SO libspdk_blobfs.so.10.0 00:03:20.410 CC lib/scsi/scsi.o 00:03:20.410 LIB libspdk_lvol.a 00:03:20.410 SYMLINK libspdk_blobfs.so 00:03:20.410 CC lib/ublk/ublk_rpc.o 00:03:20.410 SO libspdk_lvol.so.10.0 00:03:20.410 CC lib/ftl/ftl_sb.o 00:03:20.410 LIB libspdk_nbd.a 00:03:20.410 CC lib/ftl/ftl_l2p.o 00:03:20.670 SO libspdk_nbd.so.7.0 00:03:20.670 CC lib/ftl/ftl_l2p_flat.o 00:03:20.670 
SYMLINK libspdk_lvol.so 00:03:20.670 CC lib/ftl/ftl_nv_cache.o 00:03:20.670 CC lib/ftl/ftl_band.o 00:03:20.670 SYMLINK libspdk_nbd.so 00:03:20.670 CC lib/ftl/ftl_band_ops.o 00:03:20.670 CC lib/scsi/scsi_bdev.o 00:03:20.670 CC lib/scsi/scsi_pr.o 00:03:20.670 CC lib/scsi/scsi_rpc.o 00:03:20.670 CC lib/ftl/ftl_writer.o 00:03:20.670 CC lib/ftl/ftl_rq.o 00:03:20.670 LIB libspdk_ublk.a 00:03:20.670 SO libspdk_ublk.so.3.0 00:03:20.929 CC lib/ftl/ftl_reloc.o 00:03:20.929 SYMLINK libspdk_ublk.so 00:03:20.929 CC lib/scsi/task.o 00:03:20.929 CC lib/ftl/ftl_l2p_cache.o 00:03:20.929 CC lib/ftl/ftl_p2l.o 00:03:20.929 CC lib/ftl/ftl_p2l_log.o 00:03:20.929 CC lib/ftl/mngt/ftl_mngt.o 00:03:20.929 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.929 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:21.188 LIB libspdk_scsi.a 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:21.188 SO libspdk_scsi.so.9.0 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:21.188 CC lib/nvmf/ctrlr_discovery.o 00:03:21.188 SYMLINK libspdk_scsi.so 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:21.188 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.446 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.446 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.446 CC lib/iscsi/conn.o 00:03:21.446 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.446 CC lib/iscsi/init_grp.o 00:03:21.446 CC lib/iscsi/iscsi.o 00:03:21.707 CC lib/vhost/vhost.o 00:03:21.707 CC lib/iscsi/param.o 00:03:21.707 CC lib/iscsi/portal_grp.o 00:03:21.707 CC lib/iscsi/tgt_node.o 00:03:21.707 CC lib/iscsi/iscsi_subsystem.o 00:03:21.707 CC lib/iscsi/iscsi_rpc.o 00:03:21.707 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.707 CC lib/iscsi/task.o 00:03:22.090 CC lib/nvmf/ctrlr_bdev.o 00:03:22.090 CC lib/nvmf/subsystem.o 00:03:22.090 CC lib/ftl/utils/ftl_conf.o 00:03:22.090 CC lib/ftl/utils/ftl_md.o 00:03:22.090 CC lib/vhost/vhost_rpc.o 00:03:22.090 CC lib/vhost/vhost_scsi.o 00:03:22.090 CC lib/vhost/vhost_blk.o 00:03:22.090 CC lib/vhost/rte_vhost_user.o 00:03:22.090 CC lib/ftl/utils/ftl_mempool.o 00:03:22.376 CC lib/ftl/utils/ftl_bitmap.o 00:03:22.376 CC lib/ftl/utils/ftl_property.o 00:03:22.376 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:22.376 CC lib/nvmf/nvmf.o 00:03:22.376 CC lib/nvmf/nvmf_rpc.o 00:03:22.634 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:22.634 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:22.634 CC lib/nvmf/transport.o 00:03:22.634 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:22.634 LIB libspdk_iscsi.a 00:03:22.634 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:22.895 SO libspdk_iscsi.so.8.0 00:03:22.895 CC lib/nvmf/tcp.o 00:03:22.895 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:22.895 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:22.895 SYMLINK libspdk_iscsi.so 00:03:22.895 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:22.895 CC lib/nvmf/stubs.o 00:03:22.895 LIB libspdk_vhost.a 00:03:22.895 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:22.895 SO libspdk_vhost.so.8.0 00:03:22.895 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.153 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.153 SYMLINK libspdk_vhost.so 00:03:23.153 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:23.153 CC lib/nvmf/mdns_server.o 00:03:23.153 CC lib/nvmf/rdma.o 00:03:23.153 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:23.153 CC lib/nvmf/auth.o 00:03:23.153 CC lib/ftl/base/ftl_base_dev.o 00:03:23.153 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.153 CC lib/ftl/ftl_trace.o 00:03:23.413 LIB libspdk_ftl.a 00:03:23.674 SO libspdk_ftl.so.9.0 00:03:23.935 SYMLINK libspdk_ftl.so 00:03:25.319 
LIB libspdk_nvmf.a 00:03:25.319 SO libspdk_nvmf.so.20.0 00:03:25.580 SYMLINK libspdk_nvmf.so 00:03:25.839 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.839 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.839 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.839 CC module/accel/ioat/accel_ioat.o 00:03:25.839 CC module/keyring/file/keyring.o 00:03:25.839 CC module/sock/posix/posix.o 00:03:25.839 CC module/blob/bdev/blob_bdev.o 00:03:25.839 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.839 CC module/accel/error/accel_error.o 00:03:25.839 CC module/fsdev/aio/fsdev_aio.o 00:03:25.839 LIB libspdk_env_dpdk_rpc.a 00:03:25.839 SO libspdk_env_dpdk_rpc.so.6.0 00:03:26.098 LIB libspdk_scheduler_gscheduler.a 00:03:26.098 SYMLINK libspdk_env_dpdk_rpc.so 00:03:26.098 SO libspdk_scheduler_gscheduler.so.4.0 00:03:26.098 CC module/accel/error/accel_error_rpc.o 00:03:26.098 LIB libspdk_scheduler_dpdk_governor.a 00:03:26.098 CC module/keyring/file/keyring_rpc.o 00:03:26.098 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:26.098 SYMLINK libspdk_scheduler_gscheduler.so 00:03:26.098 CC module/accel/ioat/accel_ioat_rpc.o 00:03:26.098 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:26.098 LIB libspdk_scheduler_dynamic.a 00:03:26.098 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:26.098 SO libspdk_scheduler_dynamic.so.4.0 00:03:26.098 LIB libspdk_blob_bdev.a 00:03:26.098 LIB libspdk_accel_error.a 00:03:26.098 SO libspdk_blob_bdev.so.11.0 00:03:26.098 SO libspdk_accel_error.so.2.0 00:03:26.098 LIB libspdk_keyring_file.a 00:03:26.098 SYMLINK libspdk_scheduler_dynamic.so 00:03:26.098 SO libspdk_keyring_file.so.2.0 00:03:26.098 CC module/keyring/linux/keyring.o 00:03:26.098 SYMLINK libspdk_blob_bdev.so 00:03:26.098 CC module/keyring/linux/keyring_rpc.o 00:03:26.098 LIB libspdk_accel_ioat.a 00:03:26.098 SYMLINK libspdk_accel_error.so 00:03:26.098 CC module/fsdev/aio/linux_aio_mgr.o 00:03:26.098 SYMLINK libspdk_keyring_file.so 00:03:26.098 CC module/accel/dsa/accel_dsa.o 00:03:26.098 CC module/accel/dsa/accel_dsa_rpc.o 00:03:26.098 SO libspdk_accel_ioat.so.6.0 00:03:26.357 SYMLINK libspdk_accel_ioat.so 00:03:26.357 LIB libspdk_keyring_linux.a 00:03:26.357 CC module/accel/iaa/accel_iaa.o 00:03:26.357 SO libspdk_keyring_linux.so.1.0 00:03:26.357 SYMLINK libspdk_keyring_linux.so 00:03:26.357 CC module/accel/iaa/accel_iaa_rpc.o 00:03:26.357 CC module/bdev/error/vbdev_error.o 00:03:26.357 LIB libspdk_accel_dsa.a 00:03:26.357 CC module/bdev/delay/vbdev_delay.o 00:03:26.357 CC module/bdev/gpt/gpt.o 00:03:26.357 SO libspdk_accel_dsa.so.5.0 00:03:26.357 CC module/bdev/error/vbdev_error_rpc.o 00:03:26.357 LIB libspdk_accel_iaa.a 00:03:26.357 CC module/blobfs/bdev/blobfs_bdev.o 00:03:26.616 SO libspdk_accel_iaa.so.3.0 00:03:26.616 SYMLINK libspdk_accel_dsa.so 00:03:26.616 CC module/bdev/lvol/vbdev_lvol.o 00:03:26.616 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:26.616 SYMLINK libspdk_accel_iaa.so 00:03:26.616 LIB libspdk_fsdev_aio.a 00:03:26.616 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.616 SO libspdk_fsdev_aio.so.1.0 00:03:26.616 LIB libspdk_sock_posix.a 00:03:26.616 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.616 SO libspdk_sock_posix.so.6.0 00:03:26.616 LIB libspdk_bdev_error.a 00:03:26.616 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:26.616 SYMLINK libspdk_fsdev_aio.so 00:03:26.616 SO libspdk_bdev_error.so.6.0 00:03:26.616 SYMLINK libspdk_sock_posix.so 00:03:26.616 SYMLINK libspdk_bdev_error.so 00:03:26.875 LIB libspdk_bdev_gpt.a 00:03:26.875 CC module/bdev/malloc/bdev_malloc.o 00:03:26.875 LIB 
libspdk_bdev_delay.a 00:03:26.875 CC module/bdev/null/bdev_null.o 00:03:26.875 SO libspdk_bdev_gpt.so.6.0 00:03:26.875 CC module/bdev/nvme/bdev_nvme.o 00:03:26.875 SO libspdk_bdev_delay.so.6.0 00:03:26.875 LIB libspdk_blobfs_bdev.a 00:03:26.875 SO libspdk_blobfs_bdev.so.6.0 00:03:26.875 SYMLINK libspdk_bdev_gpt.so 00:03:26.875 CC module/bdev/null/bdev_null_rpc.o 00:03:26.875 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.875 CC module/bdev/raid/bdev_raid.o 00:03:26.875 SYMLINK libspdk_bdev_delay.so 00:03:26.875 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.875 SYMLINK libspdk_blobfs_bdev.so 00:03:26.875 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.875 CC module/bdev/raid/raid0.o 00:03:26.875 LIB libspdk_bdev_lvol.a 00:03:26.875 SO libspdk_bdev_lvol.so.6.0 00:03:27.134 LIB libspdk_bdev_null.a 00:03:27.134 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:27.134 SO libspdk_bdev_null.so.6.0 00:03:27.134 SYMLINK libspdk_bdev_lvol.so 00:03:27.134 CC module/bdev/raid/raid1.o 00:03:27.134 CC module/bdev/raid/concat.o 00:03:27.134 SYMLINK libspdk_bdev_null.so 00:03:27.134 CC module/bdev/split/vbdev_split.o 00:03:27.134 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.134 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:27.134 LIB libspdk_bdev_malloc.a 00:03:27.134 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:27.134 SO libspdk_bdev_malloc.so.6.0 00:03:27.134 CC module/bdev/xnvme/bdev_xnvme.o 00:03:27.134 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:27.393 SYMLINK libspdk_bdev_malloc.so 00:03:27.393 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.393 LIB libspdk_bdev_passthru.a 00:03:27.393 LIB libspdk_bdev_split.a 00:03:27.393 SO libspdk_bdev_passthru.so.6.0 00:03:27.393 SO libspdk_bdev_split.so.6.0 00:03:27.393 SYMLINK libspdk_bdev_split.so 00:03:27.393 CC module/bdev/aio/bdev_aio.o 00:03:27.393 SYMLINK libspdk_bdev_passthru.so 00:03:27.393 CC module/bdev/aio/bdev_aio_rpc.o 00:03:27.393 CC module/bdev/ftl/bdev_ftl.o 00:03:27.393 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:27.393 LIB libspdk_bdev_xnvme.a 00:03:27.393 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.393 SO libspdk_bdev_xnvme.so.3.0 00:03:27.652 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.652 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.652 LIB libspdk_bdev_zone_block.a 00:03:27.652 SO libspdk_bdev_zone_block.so.6.0 00:03:27.652 SYMLINK libspdk_bdev_xnvme.so 00:03:27.652 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.652 SYMLINK libspdk_bdev_zone_block.so 00:03:27.652 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.652 CC module/bdev/nvme/nvme_rpc.o 00:03:27.652 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.652 LIB libspdk_bdev_aio.a 00:03:27.652 LIB libspdk_bdev_raid.a 00:03:27.652 LIB libspdk_bdev_ftl.a 00:03:27.652 SO libspdk_bdev_aio.so.6.0 00:03:27.652 SO libspdk_bdev_raid.so.6.0 00:03:27.652 SO libspdk_bdev_ftl.so.6.0 00:03:27.910 SYMLINK libspdk_bdev_ftl.so 00:03:27.910 SYMLINK libspdk_bdev_aio.so 00:03:27.910 SYMLINK libspdk_bdev_raid.so 00:03:27.910 CC module/bdev/nvme/bdev_mdns_client.o 00:03:27.910 CC module/bdev/nvme/vbdev_opal.o 00:03:27.910 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.910 LIB libspdk_bdev_iscsi.a 00:03:27.910 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.910 SO libspdk_bdev_iscsi.so.6.0 00:03:27.910 SYMLINK libspdk_bdev_iscsi.so 00:03:27.910 LIB libspdk_bdev_virtio.a 00:03:27.910 SO libspdk_bdev_virtio.so.6.0 00:03:28.168 SYMLINK libspdk_bdev_virtio.so 00:03:28.735 LIB libspdk_bdev_nvme.a 00:03:28.995 SO libspdk_bdev_nvme.so.7.1 00:03:28.995 SYMLINK libspdk_bdev_nvme.so 00:03:29.562 CC 
module/event/subsystems/vmd/vmd.o 00:03:29.562 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:29.562 CC module/event/subsystems/sock/sock.o 00:03:29.562 CC module/event/subsystems/scheduler/scheduler.o 00:03:29.562 CC module/event/subsystems/iobuf/iobuf.o 00:03:29.562 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:29.562 CC module/event/subsystems/keyring/keyring.o 00:03:29.562 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:29.562 CC module/event/subsystems/fsdev/fsdev.o 00:03:29.562 LIB libspdk_event_sock.a 00:03:29.562 LIB libspdk_event_scheduler.a 00:03:29.562 LIB libspdk_event_keyring.a 00:03:29.562 LIB libspdk_event_vhost_blk.a 00:03:29.562 SO libspdk_event_sock.so.5.0 00:03:29.562 LIB libspdk_event_vmd.a 00:03:29.562 SO libspdk_event_keyring.so.1.0 00:03:29.562 SO libspdk_event_vhost_blk.so.3.0 00:03:29.562 SO libspdk_event_scheduler.so.4.0 00:03:29.562 LIB libspdk_event_iobuf.a 00:03:29.562 LIB libspdk_event_fsdev.a 00:03:29.562 SO libspdk_event_vmd.so.6.0 00:03:29.562 SO libspdk_event_fsdev.so.1.0 00:03:29.562 SO libspdk_event_iobuf.so.3.0 00:03:29.562 SYMLINK libspdk_event_keyring.so 00:03:29.562 SYMLINK libspdk_event_sock.so 00:03:29.562 SYMLINK libspdk_event_scheduler.so 00:03:29.562 SYMLINK libspdk_event_vhost_blk.so 00:03:29.562 SYMLINK libspdk_event_vmd.so 00:03:29.562 SYMLINK libspdk_event_fsdev.so 00:03:29.562 SYMLINK libspdk_event_iobuf.so 00:03:29.820 CC module/event/subsystems/accel/accel.o 00:03:29.820 LIB libspdk_event_accel.a 00:03:30.077 SO libspdk_event_accel.so.6.0 00:03:30.077 SYMLINK libspdk_event_accel.so 00:03:30.334 CC module/event/subsystems/bdev/bdev.o 00:03:30.334 LIB libspdk_event_bdev.a 00:03:30.334 SO libspdk_event_bdev.so.6.0 00:03:30.334 SYMLINK libspdk_event_bdev.so 00:03:30.593 CC module/event/subsystems/nbd/nbd.o 00:03:30.593 CC module/event/subsystems/scsi/scsi.o 00:03:30.593 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.593 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.593 CC module/event/subsystems/ublk/ublk.o 00:03:30.851 LIB libspdk_event_nbd.a 00:03:30.851 LIB libspdk_event_ublk.a 00:03:30.851 LIB libspdk_event_scsi.a 00:03:30.851 SO libspdk_event_nbd.so.6.0 00:03:30.851 SO libspdk_event_ublk.so.3.0 00:03:30.851 SO libspdk_event_scsi.so.6.0 00:03:30.851 SYMLINK libspdk_event_nbd.so 00:03:30.851 SYMLINK libspdk_event_ublk.so 00:03:30.851 SYMLINK libspdk_event_scsi.so 00:03:30.851 LIB libspdk_event_nvmf.a 00:03:30.851 SO libspdk_event_nvmf.so.6.0 00:03:30.851 SYMLINK libspdk_event_nvmf.so 00:03:31.108 CC module/event/subsystems/iscsi/iscsi.o 00:03:31.108 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:31.108 LIB libspdk_event_vhost_scsi.a 00:03:31.108 LIB libspdk_event_iscsi.a 00:03:31.108 SO libspdk_event_vhost_scsi.so.3.0 00:03:31.108 SO libspdk_event_iscsi.so.6.0 00:03:31.108 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.108 SYMLINK libspdk_event_iscsi.so 00:03:31.366 SO libspdk.so.6.0 00:03:31.366 SYMLINK libspdk.so 00:03:31.628 CC app/trace_record/trace_record.o 00:03:31.628 CXX app/trace/trace.o 00:03:31.628 TEST_HEADER include/spdk/accel.h 00:03:31.628 TEST_HEADER include/spdk/accel_module.h 00:03:31.628 TEST_HEADER include/spdk/assert.h 00:03:31.628 TEST_HEADER include/spdk/barrier.h 00:03:31.628 TEST_HEADER include/spdk/base64.h 00:03:31.628 TEST_HEADER include/spdk/bdev.h 00:03:31.628 TEST_HEADER include/spdk/bdev_module.h 00:03:31.628 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.628 TEST_HEADER include/spdk/bit_array.h 00:03:31.628 TEST_HEADER include/spdk/bit_pool.h 00:03:31.628 TEST_HEADER 
include/spdk/blob_bdev.h 00:03:31.628 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.628 CC app/nvmf_tgt/nvmf_main.o 00:03:31.628 TEST_HEADER include/spdk/blobfs.h 00:03:31.628 TEST_HEADER include/spdk/blob.h 00:03:31.628 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.628 TEST_HEADER include/spdk/conf.h 00:03:31.628 TEST_HEADER include/spdk/config.h 00:03:31.628 TEST_HEADER include/spdk/cpuset.h 00:03:31.628 TEST_HEADER include/spdk/crc16.h 00:03:31.628 TEST_HEADER include/spdk/crc32.h 00:03:31.628 TEST_HEADER include/spdk/crc64.h 00:03:31.628 TEST_HEADER include/spdk/dif.h 00:03:31.628 TEST_HEADER include/spdk/dma.h 00:03:31.628 CC examples/ioat/perf/perf.o 00:03:31.628 TEST_HEADER include/spdk/endian.h 00:03:31.628 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.628 TEST_HEADER include/spdk/env.h 00:03:31.628 TEST_HEADER include/spdk/event.h 00:03:31.628 TEST_HEADER include/spdk/fd_group.h 00:03:31.628 TEST_HEADER include/spdk/fd.h 00:03:31.628 TEST_HEADER include/spdk/file.h 00:03:31.628 TEST_HEADER include/spdk/fsdev.h 00:03:31.628 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.628 TEST_HEADER include/spdk/ftl.h 00:03:31.628 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.628 CC examples/util/zipf/zipf.o 00:03:31.628 CC test/thread/poller_perf/poller_perf.o 00:03:31.628 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.628 TEST_HEADER include/spdk/hexlify.h 00:03:31.628 TEST_HEADER include/spdk/histogram_data.h 00:03:31.628 TEST_HEADER include/spdk/idxd.h 00:03:31.628 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.628 TEST_HEADER include/spdk/init.h 00:03:31.628 CC test/dma/test_dma/test_dma.o 00:03:31.628 TEST_HEADER include/spdk/ioat.h 00:03:31.628 CC test/app/bdev_svc/bdev_svc.o 00:03:31.628 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.628 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.628 TEST_HEADER include/spdk/json.h 00:03:31.628 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.628 TEST_HEADER include/spdk/keyring.h 00:03:31.628 TEST_HEADER include/spdk/keyring_module.h 00:03:31.628 TEST_HEADER include/spdk/likely.h 00:03:31.628 TEST_HEADER include/spdk/log.h 00:03:31.628 TEST_HEADER include/spdk/lvol.h 00:03:31.628 TEST_HEADER include/spdk/md5.h 00:03:31.628 TEST_HEADER include/spdk/memory.h 00:03:31.628 TEST_HEADER include/spdk/mmio.h 00:03:31.628 TEST_HEADER include/spdk/nbd.h 00:03:31.628 TEST_HEADER include/spdk/net.h 00:03:31.628 TEST_HEADER include/spdk/notify.h 00:03:31.628 TEST_HEADER include/spdk/nvme.h 00:03:31.628 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.628 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.628 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.628 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.628 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.628 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.628 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.628 TEST_HEADER include/spdk/nvmf.h 00:03:31.628 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.628 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.628 TEST_HEADER include/spdk/opal.h 00:03:31.628 TEST_HEADER include/spdk/opal_spec.h 00:03:31.628 TEST_HEADER include/spdk/pci_ids.h 00:03:31.628 TEST_HEADER include/spdk/pipe.h 00:03:31.628 TEST_HEADER include/spdk/queue.h 00:03:31.628 TEST_HEADER include/spdk/reduce.h 00:03:31.628 TEST_HEADER include/spdk/rpc.h 00:03:31.628 TEST_HEADER include/spdk/scheduler.h 00:03:31.628 TEST_HEADER include/spdk/scsi.h 00:03:31.628 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.628 TEST_HEADER include/spdk/sock.h 00:03:31.628 TEST_HEADER include/spdk/stdinc.h 00:03:31.628 
TEST_HEADER include/spdk/string.h 00:03:31.628 TEST_HEADER include/spdk/thread.h 00:03:31.628 TEST_HEADER include/spdk/trace.h 00:03:31.628 TEST_HEADER include/spdk/trace_parser.h 00:03:31.628 TEST_HEADER include/spdk/tree.h 00:03:31.628 TEST_HEADER include/spdk/ublk.h 00:03:31.628 TEST_HEADER include/spdk/util.h 00:03:31.628 TEST_HEADER include/spdk/uuid.h 00:03:31.628 TEST_HEADER include/spdk/version.h 00:03:31.628 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.628 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.628 TEST_HEADER include/spdk/vhost.h 00:03:31.628 TEST_HEADER include/spdk/vmd.h 00:03:31.628 LINK zipf 00:03:31.628 TEST_HEADER include/spdk/xor.h 00:03:31.628 TEST_HEADER include/spdk/zipf.h 00:03:31.628 CXX test/cpp_headers/accel.o 00:03:31.909 LINK nvmf_tgt 00:03:31.909 LINK spdk_trace_record 00:03:31.909 LINK interrupt_tgt 00:03:31.909 LINK poller_perf 00:03:31.909 LINK bdev_svc 00:03:31.909 LINK ioat_perf 00:03:31.909 LINK spdk_trace 00:03:31.909 CXX test/cpp_headers/accel_module.o 00:03:31.909 CC examples/ioat/verify/verify.o 00:03:32.201 CC app/iscsi_tgt/iscsi_tgt.o 00:03:32.201 CC examples/sock/hello_world/hello_sock.o 00:03:32.201 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.201 CC examples/thread/thread/thread_ex.o 00:03:32.201 CXX test/cpp_headers/assert.o 00:03:32.201 CC app/spdk_tgt/spdk_tgt.o 00:03:32.201 CC examples/vmd/led/led.o 00:03:32.201 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.201 LINK verify 00:03:32.201 LINK test_dma 00:03:32.201 LINK lsvmd 00:03:32.201 LINK iscsi_tgt 00:03:32.201 CXX test/cpp_headers/barrier.o 00:03:32.201 LINK spdk_tgt 00:03:32.201 LINK led 00:03:32.201 LINK hello_sock 00:03:32.201 LINK thread 00:03:32.201 CXX test/cpp_headers/base64.o 00:03:32.459 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.459 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:32.459 CC test/app/histogram_perf/histogram_perf.o 00:03:32.459 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:32.459 CXX test/cpp_headers/bdev.o 00:03:32.459 CC test/app/jsoncat/jsoncat.o 00:03:32.459 CC app/spdk_lspci/spdk_lspci.o 00:03:32.459 LINK histogram_perf 00:03:32.459 LINK nvme_fuzz 00:03:32.459 CXX test/cpp_headers/bdev_module.o 00:03:32.459 CC app/spdk_nvme_perf/perf.o 00:03:32.459 CC app/spdk_nvme_identify/identify.o 00:03:32.459 LINK jsoncat 00:03:32.717 LINK spdk_lspci 00:03:32.717 CC examples/idxd/perf/perf.o 00:03:32.717 CC test/app/stub/stub.o 00:03:32.717 CXX test/cpp_headers/bdev_zone.o 00:03:32.717 LINK vhost_fuzz 00:03:32.717 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.717 CC test/event/event_perf/event_perf.o 00:03:32.717 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.974 LINK stub 00:03:32.974 CXX test/cpp_headers/bit_array.o 00:03:32.974 LINK event_perf 00:03:32.974 CC test/event/reactor/reactor.o 00:03:32.974 LINK idxd_perf 00:03:32.974 LINK spdk_nvme_discover 00:03:32.974 CXX test/cpp_headers/bit_pool.o 00:03:32.974 LINK reactor 00:03:33.232 CC app/spdk_top/spdk_top.o 00:03:33.232 CXX test/cpp_headers/blob_bdev.o 00:03:33.232 CC app/vhost/vhost.o 00:03:33.232 CC test/env/vtophys/vtophys.o 00:03:33.232 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:33.232 CC test/event/reactor_perf/reactor_perf.o 00:03:33.232 LINK spdk_nvme_identify 00:03:33.232 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.232 LINK vhost 00:03:33.489 LINK mem_callbacks 00:03:33.489 LINK vtophys 00:03:33.489 LINK spdk_nvme_perf 00:03:33.489 CXX test/cpp_headers/blobfs.o 00:03:33.489 LINK reactor_perf 00:03:33.489 CXX test/cpp_headers/blob.o 00:03:33.489 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.489 LINK hello_fsdev 00:03:33.489 CC test/event/app_repeat/app_repeat.o 00:03:33.489 CC test/env/memory/memory_ut.o 00:03:33.489 CC test/env/pci/pci_ut.o 00:03:33.489 CC test/event/scheduler/scheduler.o 00:03:33.747 CXX test/cpp_headers/conf.o 00:03:33.747 LINK app_repeat 00:03:33.747 LINK env_dpdk_post_init 00:03:33.747 CC test/nvme/aer/aer.o 00:03:33.747 CXX test/cpp_headers/config.o 00:03:33.747 CXX test/cpp_headers/cpuset.o 00:03:33.747 CXX test/cpp_headers/crc16.o 00:03:33.747 LINK scheduler 00:03:33.747 CC examples/accel/perf/accel_perf.o 00:03:34.005 LINK aer 00:03:34.005 CXX test/cpp_headers/crc32.o 00:03:34.005 CC examples/blob/hello_world/hello_blob.o 00:03:34.005 LINK pci_ut 00:03:34.005 CC examples/blob/cli/blobcli.o 00:03:34.005 LINK spdk_top 00:03:34.005 LINK iscsi_fuzz 00:03:34.005 CC app/spdk_dd/spdk_dd.o 00:03:34.005 CXX test/cpp_headers/crc64.o 00:03:34.264 CC test/nvme/reset/reset.o 00:03:34.264 LINK hello_blob 00:03:34.264 CC test/nvme/sgl/sgl.o 00:03:34.264 CXX test/cpp_headers/dif.o 00:03:34.264 CC test/nvme/e2edp/nvme_dp.o 00:03:34.264 CC test/nvme/overhead/overhead.o 00:03:34.264 LINK accel_perf 00:03:34.264 LINK reset 00:03:34.264 CXX test/cpp_headers/dma.o 00:03:34.522 LINK spdk_dd 00:03:34.522 LINK memory_ut 00:03:34.522 CC app/fio/nvme/fio_plugin.o 00:03:34.522 CXX test/cpp_headers/endian.o 00:03:34.522 LINK nvme_dp 00:03:34.522 LINK overhead 00:03:34.522 LINK sgl 00:03:34.522 LINK blobcli 00:03:34.522 CC app/fio/bdev/fio_plugin.o 00:03:34.522 CC examples/nvme/hello_world/hello_world.o 00:03:34.522 CXX test/cpp_headers/env_dpdk.o 00:03:34.780 CC examples/nvme/reconnect/reconnect.o 00:03:34.780 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.780 CC examples/nvme/arbitration/arbitration.o 00:03:34.780 CXX test/cpp_headers/env.o 00:03:34.780 CC test/nvme/err_injection/err_injection.o 00:03:34.780 CC examples/nvme/hotplug/hotplug.o 00:03:34.780 LINK hello_world 00:03:34.780 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.780 CXX test/cpp_headers/event.o 00:03:34.780 LINK err_injection 00:03:34.780 LINK spdk_nvme 00:03:35.037 LINK spdk_bdev 00:03:35.037 LINK hello_bdev 00:03:35.037 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.037 LINK hotplug 00:03:35.037 CXX test/cpp_headers/fd_group.o 00:03:35.037 LINK reconnect 00:03:35.037 LINK arbitration 00:03:35.037 CC test/nvme/startup/startup.o 00:03:35.037 CC test/rpc_client/rpc_client_test.o 00:03:35.037 CXX test/cpp_headers/fd.o 00:03:35.037 LINK cmb_copy 00:03:35.037 LINK nvme_manage 00:03:35.296 CXX test/cpp_headers/file.o 00:03:35.296 LINK startup 00:03:35.296 CC test/accel/dif/dif.o 00:03:35.296 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.296 CC examples/nvme/abort/abort.o 00:03:35.296 LINK rpc_client_test 00:03:35.296 CC test/blobfs/mkfs/mkfs.o 00:03:35.296 CXX test/cpp_headers/fsdev.o 00:03:35.296 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.296 CC test/nvme/reserve/reserve.o 00:03:35.296 CC test/lvol/esnap/esnap.o 00:03:35.296 CXX test/cpp_headers/fsdev_module.o 00:03:35.296 CC test/nvme/simple_copy/simple_copy.o 00:03:35.554 LINK pmr_persistence 00:03:35.554 CXX test/cpp_headers/ftl.o 00:03:35.554 LINK mkfs 00:03:35.554 LINK reserve 00:03:35.554 CXX test/cpp_headers/fuse_dispatcher.o 00:03:35.554 CXX test/cpp_headers/gpt_spec.o 00:03:35.554 LINK abort 00:03:35.554 LINK simple_copy 00:03:35.813 CC test/nvme/connect_stress/connect_stress.o 00:03:35.813 CC test/nvme/boot_partition/boot_partition.o 00:03:35.813 CXX 
test/cpp_headers/hexlify.o 00:03:35.813 CXX test/cpp_headers/histogram_data.o 00:03:35.813 CC test/nvme/compliance/nvme_compliance.o 00:03:35.813 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.813 LINK boot_partition 00:03:35.813 LINK connect_stress 00:03:35.813 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.813 CXX test/cpp_headers/idxd.o 00:03:35.813 LINK dif 00:03:35.813 CXX test/cpp_headers/idxd_spec.o 00:03:36.071 CXX test/cpp_headers/init.o 00:03:36.071 CXX test/cpp_headers/ioat.o 00:03:36.071 LINK fused_ordering 00:03:36.071 CXX test/cpp_headers/ioat_spec.o 00:03:36.071 LINK doorbell_aers 00:03:36.071 CC test/nvme/fdp/fdp.o 00:03:36.071 LINK bdevperf 00:03:36.071 LINK nvme_compliance 00:03:36.071 CXX test/cpp_headers/iscsi_spec.o 00:03:36.071 CXX test/cpp_headers/json.o 00:03:36.071 CXX test/cpp_headers/jsonrpc.o 00:03:36.071 CC test/nvme/cuse/cuse.o 00:03:36.071 CXX test/cpp_headers/keyring.o 00:03:36.329 CXX test/cpp_headers/keyring_module.o 00:03:36.329 CXX test/cpp_headers/likely.o 00:03:36.329 CC test/bdev/bdevio/bdevio.o 00:03:36.329 CXX test/cpp_headers/log.o 00:03:36.329 CXX test/cpp_headers/lvol.o 00:03:36.329 CXX test/cpp_headers/md5.o 00:03:36.329 CXX test/cpp_headers/memory.o 00:03:36.329 CXX test/cpp_headers/mmio.o 00:03:36.329 LINK fdp 00:03:36.329 CXX test/cpp_headers/nbd.o 00:03:36.329 CXX test/cpp_headers/net.o 00:03:36.329 CC examples/nvmf/nvmf/nvmf.o 00:03:36.329 CXX test/cpp_headers/notify.o 00:03:36.594 CXX test/cpp_headers/nvme.o 00:03:36.594 CXX test/cpp_headers/nvme_intel.o 00:03:36.594 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.594 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.594 CXX test/cpp_headers/nvme_spec.o 00:03:36.594 CXX test/cpp_headers/nvme_zns.o 00:03:36.594 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.594 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.594 LINK nvmf 00:03:36.594 LINK bdevio 00:03:36.594 CXX test/cpp_headers/nvmf.o 00:03:36.594 CXX test/cpp_headers/nvmf_spec.o 00:03:36.594 CXX test/cpp_headers/nvmf_transport.o 00:03:36.594 CXX test/cpp_headers/opal.o 00:03:36.852 CXX test/cpp_headers/opal_spec.o 00:03:36.852 CXX test/cpp_headers/pci_ids.o 00:03:36.852 CXX test/cpp_headers/pipe.o 00:03:36.852 CXX test/cpp_headers/queue.o 00:03:36.852 CXX test/cpp_headers/reduce.o 00:03:36.852 CXX test/cpp_headers/rpc.o 00:03:36.852 CXX test/cpp_headers/scheduler.o 00:03:36.852 CXX test/cpp_headers/scsi.o 00:03:36.852 CXX test/cpp_headers/scsi_spec.o 00:03:36.852 CXX test/cpp_headers/sock.o 00:03:36.852 CXX test/cpp_headers/stdinc.o 00:03:36.852 CXX test/cpp_headers/string.o 00:03:36.852 CXX test/cpp_headers/thread.o 00:03:36.852 CXX test/cpp_headers/trace.o 00:03:36.852 CXX test/cpp_headers/trace_parser.o 00:03:36.852 CXX test/cpp_headers/tree.o 00:03:37.111 CXX test/cpp_headers/ublk.o 00:03:37.111 CXX test/cpp_headers/util.o 00:03:37.111 CXX test/cpp_headers/uuid.o 00:03:37.111 CXX test/cpp_headers/version.o 00:03:37.111 CXX test/cpp_headers/vfio_user_pci.o 00:03:37.111 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.111 CXX test/cpp_headers/vhost.o 00:03:37.111 CXX test/cpp_headers/vmd.o 00:03:37.111 CXX test/cpp_headers/xor.o 00:03:37.111 CXX test/cpp_headers/zipf.o 00:03:37.369 LINK cuse 00:03:39.905 LINK esnap 00:03:40.164 00:03:40.164 real 1m6.571s 00:03:40.164 user 6m10.823s 00:03:40.164 sys 1m8.063s 00:03:40.164 23:47:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:40.164 23:47:46 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.164 ************************************ 00:03:40.164 END TEST make 
00:03:40.164 ************************************ 00:03:40.164 23:47:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.164 23:47:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.164 23:47:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.164 23:47:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.164 23:47:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.164 23:47:46 -- pm/common@44 -- $ pid=5077 00:03:40.164 23:47:46 -- pm/common@50 -- $ kill -TERM 5077 00:03:40.164 23:47:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.164 23:47:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.164 23:47:46 -- pm/common@44 -- $ pid=5078 00:03:40.164 23:47:46 -- pm/common@50 -- $ kill -TERM 5078 00:03:40.164 23:47:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:40.164 23:47:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.164 23:47:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.164 23:47:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.164 23:47:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.164 23:47:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.164 23:47:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.164 23:47:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.164 23:47:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.164 23:47:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.164 23:47:46 -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.164 23:47:46 -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.164 23:47:46 -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.164 23:47:46 -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.164 23:47:46 -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.164 23:47:46 -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.164 23:47:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.164 23:47:46 -- scripts/common.sh@344 -- # case "$op" in 00:03:40.164 23:47:46 -- scripts/common.sh@345 -- # : 1 00:03:40.164 23:47:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.164 23:47:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.164 23:47:46 -- scripts/common.sh@365 -- # decimal 1 00:03:40.164 23:47:46 -- scripts/common.sh@353 -- # local d=1 00:03:40.164 23:47:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.164 23:47:46 -- scripts/common.sh@355 -- # echo 1 00:03:40.164 23:47:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.164 23:47:46 -- scripts/common.sh@366 -- # decimal 2 00:03:40.164 23:47:46 -- scripts/common.sh@353 -- # local d=2 00:03:40.164 23:47:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.164 23:47:46 -- scripts/common.sh@355 -- # echo 2 00:03:40.164 23:47:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.164 23:47:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.164 23:47:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.164 23:47:46 -- scripts/common.sh@368 -- # return 0 00:03:40.164 23:47:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.164 23:47:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.164 --rc genhtml_branch_coverage=1 00:03:40.164 --rc genhtml_function_coverage=1 00:03:40.164 --rc genhtml_legend=1 00:03:40.164 --rc geninfo_all_blocks=1 00:03:40.164 --rc geninfo_unexecuted_blocks=1 00:03:40.164 00:03:40.164 ' 00:03:40.164 23:47:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.164 --rc genhtml_branch_coverage=1 00:03:40.164 --rc genhtml_function_coverage=1 00:03:40.164 --rc genhtml_legend=1 00:03:40.164 --rc geninfo_all_blocks=1 00:03:40.164 --rc geninfo_unexecuted_blocks=1 00:03:40.164 00:03:40.164 ' 00:03:40.164 23:47:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.164 --rc genhtml_branch_coverage=1 00:03:40.164 --rc genhtml_function_coverage=1 00:03:40.164 --rc genhtml_legend=1 00:03:40.164 --rc geninfo_all_blocks=1 00:03:40.164 --rc geninfo_unexecuted_blocks=1 00:03:40.164 00:03:40.164 ' 00:03:40.164 23:47:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.164 --rc genhtml_branch_coverage=1 00:03:40.164 --rc genhtml_function_coverage=1 00:03:40.165 --rc genhtml_legend=1 00:03:40.165 --rc geninfo_all_blocks=1 00:03:40.165 --rc geninfo_unexecuted_blocks=1 00:03:40.165 00:03:40.165 ' 00:03:40.165 23:47:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.165 23:47:46 -- nvmf/common.sh@7 -- # uname -s 00:03:40.423 23:47:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.423 23:47:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.424 23:47:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.424 23:47:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.424 23:47:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.424 23:47:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.424 23:47:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.424 23:47:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.424 23:47:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.424 23:47:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.424 23:47:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7461d676-225a-4a53-b691-d62ceecf7cb1 00:03:40.424 
23:47:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=7461d676-225a-4a53-b691-d62ceecf7cb1 00:03:40.424 23:47:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.424 23:47:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.424 23:47:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.424 23:47:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.424 23:47:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.424 23:47:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.424 23:47:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.424 23:47:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.424 23:47:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.424 23:47:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.424 23:47:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.424 23:47:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.424 23:47:46 -- paths/export.sh@5 -- # export PATH 00:03:40.424 23:47:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.424 23:47:46 -- nvmf/common.sh@51 -- # : 0 00:03:40.424 23:47:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.424 23:47:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.424 23:47:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.424 23:47:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.424 23:47:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.424 23:47:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.424 23:47:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.424 23:47:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.424 23:47:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.424 23:47:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.424 23:47:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.424 23:47:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.424 23:47:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.424 23:47:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.424 23:47:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.424 23:47:46 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.424 23:47:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.424 23:47:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.424 23:47:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.424 23:47:46 -- spdk/autotest.sh@48 -- # udevadm_pid=54276 00:03:40.424 23:47:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.424 23:47:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.424 23:47:46 -- pm/common@17 -- # local monitor 00:03:40.424 23:47:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.424 23:47:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.424 23:47:46 -- pm/common@25 -- # sleep 1 00:03:40.424 23:47:46 -- pm/common@21 -- # date +%s 00:03:40.424 23:47:46 -- pm/common@21 -- # date +%s 00:03:40.424 23:47:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731973666 00:03:40.424 23:47:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731973666 00:03:40.424 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731973666_collect-cpu-load.pm.log 00:03:40.424 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731973666_collect-vmstat.pm.log 00:03:41.436 23:47:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.436 23:47:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.436 23:47:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.436 23:47:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.436 23:47:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.436 23:47:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:41.436 23:47:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.436 23:47:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.436 23:47:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.436 23:47:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.436 23:47:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.436 23:47:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.437 23:47:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.437 23:47:47 -- common/autotest_common.sh@1457 -- # uname 00:03:41.437 23:47:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:41.437 23:47:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.437 23:47:47 -- common/autotest_common.sh@1477 -- # uname 00:03:41.437 23:47:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:41.437 23:47:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.437 23:47:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.437 lcov: LCOV version 1.15 00:03:41.437 23:47:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:56.432 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.432 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.338 23:48:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.338 23:48:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.338 23:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:11.338 23:48:17 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.338 23:48:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.910 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:11.910 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:11.910 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:11.910 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:11.910 23:48:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:11.910 23:48:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:11.910 23:48:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:11.910 23:48:18 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:11.910 
23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:11.910 23:48:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:04:11.910 23:48:18 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:04:11.910 23:48:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:11.910 23:48:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.910 23:48:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:11.910 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.910 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.910 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:11.910 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:11.910 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:11.910 No valid GPT data, bailing 00:04:11.910 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.910 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:11.910 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:11.910 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:11.910 1+0 records in 00:04:11.910 1+0 records out 00:04:11.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289688 s, 36.2 MB/s 00:04:11.910 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.910 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.910 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:11.910 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:11.910 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:11.910 No valid GPT data, bailing 00:04:11.910 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:12.172 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:12.172 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.172 1+0 records in 00:04:12.172 1+0 records out 00:04:12.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602473 s, 174 MB/s 00:04:12.172 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.172 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.172 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:12.172 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:12.172 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.172 No valid GPT data, bailing 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:12.172 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:12.172 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.172 1+0 
records in 00:04:12.172 1+0 records out 00:04:12.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606428 s, 173 MB/s 00:04:12.172 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.172 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.172 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:12.172 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:12.172 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.172 No valid GPT data, bailing 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:12.172 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:12.172 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.172 1+0 records in 00:04:12.172 1+0 records out 00:04:12.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616154 s, 170 MB/s 00:04:12.172 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.172 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.172 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:12.172 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:12.172 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:12.172 No valid GPT data, bailing 00:04:12.172 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:12.434 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:12.434 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:12.434 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:12.434 1+0 records in 00:04:12.434 1+0 records out 00:04:12.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521296 s, 201 MB/s 00:04:12.434 23:48:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.434 23:48:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.434 23:48:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:12.434 23:48:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:12.434 23:48:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:12.434 No valid GPT data, bailing 00:04:12.434 23:48:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:12.434 23:48:18 -- scripts/common.sh@394 -- # pt= 00:04:12.434 23:48:18 -- scripts/common.sh@395 -- # return 1 00:04:12.434 23:48:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:12.434 1+0 records in 00:04:12.434 1+0 records out 00:04:12.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503105 s, 208 MB/s 00:04:12.434 23:48:18 -- spdk/autotest.sh@105 -- # sync 00:04:12.698 23:48:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.698 23:48:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.698 23:48:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.617 23:48:21 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.617 23:48:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.617 23:48:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.617 23:48:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.451 
Hugepages 00:04:15.451 node hugesize free / total 00:04:15.451 node0 1048576kB 0 / 0 00:04:15.451 node0 2048kB 0 / 0 00:04:15.451 00:04:15.451 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.451 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.713 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:15.713 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:15.713 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:15.713 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:15.713 23:48:22 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.713 23:48:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.713 23:48:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.713 23:48:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.858 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.858 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.858 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.858 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.858 23:48:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:18.247 23:48:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:18.247 23:48:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:18.247 23:48:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.247 23:48:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:18.247 23:48:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.247 23:48:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.247 23:48:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.247 23:48:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.247 23:48:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.247 23:48:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:18.247 23:48:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:18.247 23:48:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.508 Waiting for block devices as requested 00:04:18.508 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.769 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.769 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.769 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.118 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:24.118 23:48:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:24.118 23:48:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1543 -- # continue 00:04:24.118 23:48:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:24.118 23:48:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1543 -- # continue 00:04:24.118 23:48:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:24.118 23:48:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1543 -- # continue 00:04:24.118 23:48:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:24.118 23:48:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:24.118 23:48:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:24.118 23:48:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:24.118 23:48:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:24.118 23:48:30 -- common/autotest_common.sh@1543 -- # continue 00:04:24.118 23:48:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:24.118 23:48:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.118 23:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:24.118 23:48:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:24.118 23:48:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.118 23:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:24.118 23:48:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.265 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.265 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.265 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.265 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.265 23:48:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:25.265 23:48:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.265 23:48:31 -- common/autotest_common.sh@10 -- # set +x 00:04:25.527 23:48:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:25.527 23:48:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:25.527 23:48:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.527 23:48:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:25.527 23:48:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:25.527 23:48:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:25.527 23:48:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:25.527 23:48:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:25.527 23:48:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.527 23:48:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.527 23:48:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.527 23:48:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.527 23:48:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.527 23:48:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:25.527 23:48:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:25.527 23:48:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:25.527 23:48:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.527 23:48:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:25.527 23:48:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.527 23:48:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:25.527 23:48:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:25.527 23:48:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:25.527 23:48:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:25.527 23:48:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.527 23:48:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:25.527 23:48:32 -- common/autotest_common.sh@1572 -- # return 0 00:04:25.527 23:48:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:25.527 23:48:32 -- common/autotest_common.sh@1580 -- # return 0 00:04:25.527 23:48:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:25.527 23:48:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:25.527 23:48:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.527 23:48:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.527 23:48:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:25.527 23:48:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.527 23:48:32 -- common/autotest_common.sh@10 -- # set +x 00:04:25.527 23:48:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:25.527 23:48:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.527 23:48:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.527 23:48:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.527 23:48:32 -- common/autotest_common.sh@10 -- # set +x 00:04:25.527 ************************************ 00:04:25.527 START TEST env 00:04:25.527 ************************************ 00:04:25.527 23:48:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.527 * Looking for test storage... 00:04:25.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:25.527 23:48:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.527 23:48:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.527 23:48:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.789 23:48:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.789 23:48:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.789 23:48:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.789 23:48:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.789 23:48:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.789 23:48:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.789 23:48:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.789 23:48:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.789 23:48:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.789 23:48:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.789 23:48:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.789 23:48:32 env -- scripts/common.sh@344 -- # case "$op" in 00:04:25.789 23:48:32 env -- scripts/common.sh@345 -- # : 1 00:04:25.789 23:48:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.789 23:48:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.789 23:48:32 env -- scripts/common.sh@365 -- # decimal 1 00:04:25.789 23:48:32 env -- scripts/common.sh@353 -- # local d=1 00:04:25.789 23:48:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.789 23:48:32 env -- scripts/common.sh@355 -- # echo 1 00:04:25.789 23:48:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.789 23:48:32 env -- scripts/common.sh@366 -- # decimal 2 00:04:25.789 23:48:32 env -- scripts/common.sh@353 -- # local d=2 00:04:25.789 23:48:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.789 23:48:32 env -- scripts/common.sh@355 -- # echo 2 00:04:25.789 23:48:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.789 23:48:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.789 23:48:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.789 23:48:32 env -- scripts/common.sh@368 -- # return 0 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.789 --rc genhtml_branch_coverage=1 00:04:25.789 --rc genhtml_function_coverage=1 00:04:25.789 --rc genhtml_legend=1 00:04:25.789 --rc geninfo_all_blocks=1 00:04:25.789 --rc geninfo_unexecuted_blocks=1 00:04:25.789 00:04:25.789 ' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.789 --rc genhtml_branch_coverage=1 00:04:25.789 --rc genhtml_function_coverage=1 00:04:25.789 --rc genhtml_legend=1 00:04:25.789 --rc geninfo_all_blocks=1 00:04:25.789 --rc geninfo_unexecuted_blocks=1 00:04:25.789 00:04:25.789 ' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.789 --rc genhtml_branch_coverage=1 00:04:25.789 --rc genhtml_function_coverage=1 00:04:25.789 --rc genhtml_legend=1 00:04:25.789 --rc geninfo_all_blocks=1 00:04:25.789 --rc geninfo_unexecuted_blocks=1 00:04:25.789 00:04:25.789 ' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.789 --rc genhtml_branch_coverage=1 00:04:25.789 --rc genhtml_function_coverage=1 00:04:25.789 --rc genhtml_legend=1 00:04:25.789 --rc geninfo_all_blocks=1 00:04:25.789 --rc geninfo_unexecuted_blocks=1 00:04:25.789 00:04:25.789 ' 00:04:25.789 23:48:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.789 23:48:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.789 23:48:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.789 ************************************ 00:04:25.789 START TEST env_memory 00:04:25.789 ************************************ 00:04:25.789 23:48:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.789 00:04:25.789 00:04:25.789 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.789 http://cunit.sourceforge.net/ 00:04:25.789 00:04:25.789 00:04:25.789 Suite: memory 00:04:25.789 Test: alloc and free memory map ...[2024-11-18 23:48:32.315326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.789 passed 00:04:25.789 Test: mem map translation ...[2024-11-18 23:48:32.354576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.789 [2024-11-18 23:48:32.354724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.789 [2024-11-18 23:48:32.354840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.789 [2024-11-18 23:48:32.354881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.789 passed 00:04:25.789 Test: mem map registration ...[2024-11-18 23:48:32.423272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:25.789 [2024-11-18 23:48:32.423411] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:25.789 passed 00:04:26.051 Test: mem map adjacent registrations ...passed 00:04:26.051 00:04:26.051 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.051 suites 1 1 n/a 0 0 00:04:26.051 tests 4 4 4 0 0 00:04:26.051 asserts 152 152 152 0 n/a 00:04:26.051 00:04:26.051 Elapsed time = 0.235 seconds 00:04:26.051 00:04:26.051 real 0m0.279s 00:04:26.051 user 0m0.243s 00:04:26.051 sys 0m0.025s 00:04:26.051 23:48:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.051 ************************************ 00:04:26.051 END TEST env_memory 00:04:26.051 23:48:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:26.051 ************************************ 00:04:26.051 23:48:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.051 23:48:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.051 23:48:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.051 23:48:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.051 ************************************ 00:04:26.051 START TEST env_vtophys 00:04:26.051 ************************************ 00:04:26.051 23:48:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.051 EAL: lib.eal log level changed from notice to debug 00:04:26.051 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 1 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 2 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 3 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 4 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 5 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 6 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 7 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 8 as core 0 on socket 0 00:04:26.051 EAL: Detected lcore 9 as core 0 on socket 0 00:04:26.051 EAL: Maximum logical cores by configuration: 128 00:04:26.051 EAL: Detected CPU lcores: 10 00:04:26.051 EAL: Detected NUMA nodes: 1 00:04:26.051 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:26.051 EAL: Detected shared linkage of DPDK 00:04:26.051 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:26.051 EAL: Selected IOVA mode 'PA' 00:04:26.051 EAL: Probing VFIO support... 00:04:26.051 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.051 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:26.051 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.051 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.051 EAL: Setting up physically contiguous memory... 00:04:26.051 EAL: Setting maximum number of open files to 524288 00:04:26.051 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.051 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.051 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.051 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.051 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.051 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.051 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.051 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.051 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.051 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.051 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.051 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.051 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.052 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.052 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.052 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.052 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.052 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.052 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.052 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.052 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.052 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.052 EAL: Hugepages will be freed exactly as allocated. 00:04:26.052 EAL: No shared files mode enabled, IPC is disabled 00:04:26.052 EAL: No shared files mode enabled, IPC is disabled 00:04:26.312 EAL: TSC frequency is ~2600000 KHz 00:04:26.312 EAL: Main lcore 0 is ready (tid=7f9a0dccba40;cpuset=[0]) 00:04:26.312 EAL: Trying to obtain current memory policy. 00:04:26.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.312 EAL: Restoring previous memory policy: 0 00:04:26.312 EAL: request: mp_malloc_sync 00:04:26.312 EAL: No shared files mode enabled, IPC is disabled 00:04:26.312 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.312 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.312 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.312 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.312 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:26.312 00:04:26.312 00:04:26.312 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.312 http://cunit.sourceforge.net/ 00:04:26.312 00:04:26.312 00:04:26.312 Suite: components_suite 00:04:26.572 Test: vtophys_malloc_test ...passed 00:04:26.572 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:26.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.572 EAL: Restoring previous memory policy: 4 00:04:26.572 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.572 EAL: request: mp_malloc_sync 00:04:26.572 EAL: No shared files mode enabled, IPC is disabled 00:04:26.572 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.572 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.572 EAL: request: mp_malloc_sync 00:04:26.572 EAL: No shared files mode enabled, IPC is disabled 00:04:26.572 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.572 EAL: Trying to obtain current memory policy. 00:04:26.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.572 EAL: Restoring previous memory policy: 4 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.573 EAL: Trying to obtain current memory policy. 00:04:26.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.573 EAL: Restoring previous memory policy: 4 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.573 EAL: Trying to obtain current memory policy. 00:04:26.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.573 EAL: Restoring previous memory policy: 4 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.834 EAL: request: mp_malloc_sync 00:04:26.834 EAL: No shared files mode enabled, IPC is disabled 00:04:26.834 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.834 EAL: Trying to obtain current memory policy. 00:04:26.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.834 EAL: Restoring previous memory policy: 4 00:04:26.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.834 EAL: request: mp_malloc_sync 00:04:26.834 EAL: No shared files mode enabled, IPC is disabled 00:04:26.834 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.834 EAL: request: mp_malloc_sync 00:04:26.834 EAL: No shared files mode enabled, IPC is disabled 00:04:26.834 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.834 EAL: Trying to obtain current memory policy. 
00:04:26.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.834 EAL: Restoring previous memory policy: 4 00:04:26.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.834 EAL: request: mp_malloc_sync 00:04:26.834 EAL: No shared files mode enabled, IPC is disabled 00:04:26.834 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.834 EAL: request: mp_malloc_sync 00:04:26.834 EAL: No shared files mode enabled, IPC is disabled 00:04:26.834 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.095 EAL: Trying to obtain current memory policy. 00:04:27.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.095 EAL: Restoring previous memory policy: 4 00:04:27.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.095 EAL: request: mp_malloc_sync 00:04:27.095 EAL: No shared files mode enabled, IPC is disabled 00:04:27.095 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.095 EAL: request: mp_malloc_sync 00:04:27.095 EAL: No shared files mode enabled, IPC is disabled 00:04:27.095 EAL: Heap on socket 0 was shrunk by 130MB 00:04:27.356 EAL: Trying to obtain current memory policy. 00:04:27.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.356 EAL: Restoring previous memory policy: 4 00:04:27.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.356 EAL: request: mp_malloc_sync 00:04:27.356 EAL: No shared files mode enabled, IPC is disabled 00:04:27.356 EAL: Heap on socket 0 was expanded by 258MB 00:04:27.617 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.617 EAL: request: mp_malloc_sync 00:04:27.617 EAL: No shared files mode enabled, IPC is disabled 00:04:27.617 EAL: Heap on socket 0 was shrunk by 258MB 00:04:27.878 EAL: Trying to obtain current memory policy. 00:04:27.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.138 EAL: Restoring previous memory policy: 4 00:04:28.138 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.138 EAL: request: mp_malloc_sync 00:04:28.138 EAL: No shared files mode enabled, IPC is disabled 00:04:28.138 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.710 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.710 EAL: request: mp_malloc_sync 00:04:28.710 EAL: No shared files mode enabled, IPC is disabled 00:04:28.710 EAL: Heap on socket 0 was shrunk by 514MB 00:04:29.283 EAL: Trying to obtain current memory policy. 
00:04:29.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.283 EAL: Restoring previous memory policy: 4 00:04:29.283 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.283 EAL: request: mp_malloc_sync 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.226 EAL: request: mp_malloc_sync 00:04:30.226 EAL: No shared files mode enabled, IPC is disabled 00:04:30.226 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.170 passed 00:04:31.170 00:04:31.170 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.170 suites 1 1 n/a 0 0 00:04:31.170 tests 2 2 2 0 0 00:04:31.170 asserts 5782 5782 5782 0 n/a 00:04:31.170 00:04:31.170 Elapsed time = 4.784 seconds 00:04:31.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.170 EAL: request: mp_malloc_sync 00:04:31.170 EAL: No shared files mode enabled, IPC is disabled 00:04:31.170 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.170 EAL: No shared files mode enabled, IPC is disabled 00:04:31.170 EAL: No shared files mode enabled, IPC is disabled 00:04:31.170 EAL: No shared files mode enabled, IPC is disabled 00:04:31.170 00:04:31.170 real 0m5.049s 00:04:31.170 user 0m4.098s 00:04:31.170 sys 0m0.801s 00:04:31.170 23:48:37 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.170 23:48:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.170 ************************************ 00:04:31.170 END TEST env_vtophys 00:04:31.170 ************************************ 00:04:31.170 23:48:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.170 23:48:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.170 23:48:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.170 23:48:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.170 ************************************ 00:04:31.170 START TEST env_pci 00:04:31.170 ************************************ 00:04:31.170 23:48:37 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.170 00:04:31.170 00:04:31.170 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.170 http://cunit.sourceforge.net/ 00:04:31.170 00:04:31.170 00:04:31.170 Suite: pci 00:04:31.170 Test: pci_hook ...[2024-11-18 23:48:37.738714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57046 has claimed it 00:04:31.170 passed 00:04:31.170 00:04:31.170 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.170 suites 1 1 n/a 0 0 00:04:31.170 tests 1 1 1 0 0 00:04:31.170 asserts 25 25 25 0 n/a 00:04:31.170 00:04:31.170 Elapsed time = 0.007 seconds 00:04:31.170 EAL: Cannot find device (10000:00:01.0) 00:04:31.170 EAL: Failed to attach device on primary process 00:04:31.170 00:04:31.170 real 0m0.058s 00:04:31.170 user 0m0.017s 00:04:31.170 sys 0m0.040s 00:04:31.170 23:48:37 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.170 ************************************ 00:04:31.170 END TEST env_pci 00:04:31.170 ************************************ 00:04:31.170 23:48:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.170 23:48:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.170 23:48:37 env -- env/env.sh@15 -- # uname 00:04:31.170 23:48:37 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.170 23:48:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.170 23:48:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.170 23:48:37 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:31.170 23:48:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.170 23:48:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.170 ************************************ 00:04:31.170 START TEST env_dpdk_post_init 00:04:31.170 ************************************ 00:04:31.170 23:48:37 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.431 EAL: Detected CPU lcores: 10 00:04:31.431 EAL: Detected NUMA nodes: 1 00:04:31.431 EAL: Detected shared linkage of DPDK 00:04:31.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.431 EAL: Selected IOVA mode 'PA' 00:04:31.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:31.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:31.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:31.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:31.431 Starting DPDK initialization... 00:04:31.431 Starting SPDK post initialization... 00:04:31.431 SPDK NVMe probe 00:04:31.431 Attaching to 0000:00:10.0 00:04:31.431 Attaching to 0000:00:11.0 00:04:31.431 Attaching to 0000:00:12.0 00:04:31.431 Attaching to 0000:00:13.0 00:04:31.431 Attached to 0000:00:10.0 00:04:31.431 Attached to 0000:00:11.0 00:04:31.431 Attached to 0000:00:13.0 00:04:31.431 Attached to 0000:00:12.0 00:04:31.431 Cleaning up... 
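The probe sequence above is driven by the test binary itself: plain DPDK/SPDK environment initialization followed by spdk_nvme probe and attach on each emulated controller. A rough way to reproduce this step by hand on the same VM, assuming hugepages are not yet reserved (the HUGEMEM value below is an arbitrary example, not taken from this run):

    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 scripts/setup.sh                  # reserve hugepages, rebind the NVMe devices
    sudo test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000           # same core mask and base virtaddr as above
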
00:04:31.431 00:04:31.431 real 0m0.239s 00:04:31.431 user 0m0.079s 00:04:31.431 sys 0m0.063s 00:04:31.431 23:48:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.431 ************************************ 00:04:31.431 23:48:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.431 END TEST env_dpdk_post_init 00:04:31.431 ************************************ 00:04:31.431 23:48:38 env -- env/env.sh@26 -- # uname 00:04:31.431 23:48:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.431 23:48:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.431 23:48:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.431 23:48:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.431 23:48:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.693 ************************************ 00:04:31.693 START TEST env_mem_callbacks 00:04:31.693 ************************************ 00:04:31.694 23:48:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.694 EAL: Detected CPU lcores: 10 00:04:31.694 EAL: Detected NUMA nodes: 1 00:04:31.694 EAL: Detected shared linkage of DPDK 00:04:31.694 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.694 EAL: Selected IOVA mode 'PA' 00:04:31.694 00:04:31.694 00:04:31.694 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.694 http://cunit.sourceforge.net/ 00:04:31.694 00:04:31.694 00:04:31.694 Suite: memory 00:04:31.694 Test: test ... 00:04:31.694 register 0x200000200000 2097152 00:04:31.694 malloc 3145728 00:04:31.694 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.694 register 0x200000400000 4194304 00:04:31.694 buf 0x2000004fffc0 len 3145728 PASSED 00:04:31.694 malloc 64 00:04:31.694 buf 0x2000004ffec0 len 64 PASSED 00:04:31.694 malloc 4194304 00:04:31.694 register 0x200000800000 6291456 00:04:31.694 buf 0x2000009fffc0 len 4194304 PASSED 00:04:31.694 free 0x2000004fffc0 3145728 00:04:31.694 free 0x2000004ffec0 64 00:04:31.694 unregister 0x200000400000 4194304 PASSED 00:04:31.694 free 0x2000009fffc0 4194304 00:04:31.694 unregister 0x200000800000 6291456 PASSED 00:04:31.694 malloc 8388608 00:04:31.694 register 0x200000400000 10485760 00:04:31.694 buf 0x2000005fffc0 len 8388608 PASSED 00:04:31.694 free 0x2000005fffc0 8388608 00:04:31.694 unregister 0x200000400000 10485760 PASSED 00:04:31.694 passed 00:04:31.694 00:04:31.694 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.694 suites 1 1 n/a 0 0 00:04:31.694 tests 1 1 1 0 0 00:04:31.694 asserts 15 15 15 0 n/a 00:04:31.694 00:04:31.694 Elapsed time = 0.041 seconds 00:04:31.694 00:04:31.694 real 0m0.211s 00:04:31.694 user 0m0.058s 00:04:31.694 sys 0m0.051s 00:04:31.694 23:48:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.694 23:48:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:31.694 ************************************ 00:04:31.694 END TEST env_mem_callbacks 00:04:31.694 ************************************ 00:04:31.955 00:04:31.955 real 0m6.300s 00:04:31.955 user 0m4.661s 00:04:31.955 sys 0m1.194s 00:04:31.955 23:48:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.955 ************************************ 00:04:31.955 END TEST env 00:04:31.955 ************************************ 00:04:31.955 23:48:38 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.955 23:48:38 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:31.955 23:48:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.955 23:48:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.955 23:48:38 -- common/autotest_common.sh@10 -- # set +x 00:04:31.955 ************************************ 00:04:31.955 START TEST rpc 00:04:31.955 ************************************ 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:31.955 * Looking for test storage... 00:04:31.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.955 23:48:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.955 23:48:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.955 23:48:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.955 23:48:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.955 23:48:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.955 23:48:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.955 23:48:38 rpc -- scripts/common.sh@345 -- # : 1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.955 23:48:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.955 23:48:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.955 23:48:38 rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.955 23:48:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.955 23:48:38 rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.955 23:48:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.955 23:48:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.955 23:48:38 rpc -- scripts/common.sh@368 -- # return 0 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.955 --rc genhtml_branch_coverage=1 00:04:31.955 --rc genhtml_function_coverage=1 00:04:31.955 --rc genhtml_legend=1 00:04:31.955 --rc geninfo_all_blocks=1 00:04:31.955 --rc geninfo_unexecuted_blocks=1 00:04:31.955 00:04:31.955 ' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.955 --rc genhtml_branch_coverage=1 00:04:31.955 --rc genhtml_function_coverage=1 00:04:31.955 --rc genhtml_legend=1 00:04:31.955 --rc geninfo_all_blocks=1 00:04:31.955 --rc geninfo_unexecuted_blocks=1 00:04:31.955 00:04:31.955 ' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.955 --rc genhtml_branch_coverage=1 00:04:31.955 --rc genhtml_function_coverage=1 00:04:31.955 --rc genhtml_legend=1 00:04:31.955 --rc geninfo_all_blocks=1 00:04:31.955 --rc geninfo_unexecuted_blocks=1 00:04:31.955 00:04:31.955 ' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.955 --rc genhtml_branch_coverage=1 00:04:31.955 --rc genhtml_function_coverage=1 00:04:31.955 --rc genhtml_legend=1 00:04:31.955 --rc geninfo_all_blocks=1 00:04:31.955 --rc geninfo_unexecuted_blocks=1 00:04:31.955 00:04:31.955 ' 00:04:31.955 23:48:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57173 00:04:31.955 23:48:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.955 23:48:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57173 00:04:31.955 23:48:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 57173 ']' 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
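While the harness waits here, note what the rpc suites below actually exercise: rpc_cmd is a thin wrapper that sends JSON-RPC requests to the freshly started spdk_tgt over /var/tmp/spdk.sock. A rough manual equivalent of the rpc_integrity flow, using the stock scripts/rpc.py client (an illustration of the same RPC methods seen in the output below, not the script's exact assertions):

    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py bdev_get_bdevs                               # expect an empty list
    scripts/rpc.py bdev_malloc_create 8 512                     # 8 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0 # claim it behind a passthru bdev
    scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
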
00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.955 23:48:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.217 [2024-11-18 23:48:38.695530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:32.217 [2024-11-18 23:48:38.695685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57173 ] 00:04:32.217 [2024-11-18 23:48:38.859193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.478 [2024-11-18 23:48:38.990436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.478 [2024-11-18 23:48:38.990513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57173' to capture a snapshot of events at runtime. 00:04:32.478 [2024-11-18 23:48:38.990525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.478 [2024-11-18 23:48:38.990536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.478 [2024-11-18 23:48:38.990545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57173 for offline analysis/debug. 00:04:32.478 [2024-11-18 23:48:38.991529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.048 23:48:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.048 23:48:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.048 23:48:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.048 23:48:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.048 23:48:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.048 23:48:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.048 23:48:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.048 23:48:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.048 23:48:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.048 ************************************ 00:04:33.048 START TEST rpc_integrity 00:04:33.048 ************************************ 00:04:33.048 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.048 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.048 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.048 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.048 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.048 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.048 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.048 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.048 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.048 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.048 23:48:39 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.306 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.306 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.306 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.306 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.306 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.306 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.306 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.306 { 00:04:33.306 "name": "Malloc0", 00:04:33.306 "aliases": [ 00:04:33.306 "2a19b874-3c82-47c3-8ac5-e06457915936" 00:04:33.306 ], 00:04:33.306 "product_name": "Malloc disk", 00:04:33.306 "block_size": 512, 00:04:33.306 "num_blocks": 16384, 00:04:33.306 "uuid": "2a19b874-3c82-47c3-8ac5-e06457915936", 00:04:33.306 "assigned_rate_limits": { 00:04:33.306 "rw_ios_per_sec": 0, 00:04:33.306 "rw_mbytes_per_sec": 0, 00:04:33.306 "r_mbytes_per_sec": 0, 00:04:33.306 "w_mbytes_per_sec": 0 00:04:33.306 }, 00:04:33.306 "claimed": false, 00:04:33.306 "zoned": false, 00:04:33.306 "supported_io_types": { 00:04:33.306 "read": true, 00:04:33.306 "write": true, 00:04:33.306 "unmap": true, 00:04:33.307 "flush": true, 00:04:33.307 "reset": true, 00:04:33.307 "nvme_admin": false, 00:04:33.307 "nvme_io": false, 00:04:33.307 "nvme_io_md": false, 00:04:33.307 "write_zeroes": true, 00:04:33.307 "zcopy": true, 00:04:33.307 "get_zone_info": false, 00:04:33.307 "zone_management": false, 00:04:33.307 "zone_append": false, 00:04:33.307 "compare": false, 00:04:33.307 "compare_and_write": false, 00:04:33.307 "abort": true, 00:04:33.307 "seek_hole": false, 00:04:33.307 "seek_data": false, 00:04:33.307 "copy": true, 00:04:33.307 "nvme_iov_md": false 00:04:33.307 }, 00:04:33.307 "memory_domains": [ 00:04:33.307 { 00:04:33.307 "dma_device_id": "system", 00:04:33.307 "dma_device_type": 1 00:04:33.307 }, 00:04:33.307 { 00:04:33.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.307 "dma_device_type": 2 00:04:33.307 } 00:04:33.307 ], 00:04:33.307 "driver_specific": {} 00:04:33.307 } 00:04:33.307 ]' 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 [2024-11-18 23:48:39.796273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.307 [2024-11-18 23:48:39.796327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.307 [2024-11-18 23:48:39.796353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:33.307 [2024-11-18 23:48:39.796365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.307 [2024-11-18 23:48:39.798544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.307 [2024-11-18 23:48:39.798675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.307 Passthru0 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.307 
23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.307 { 00:04:33.307 "name": "Malloc0", 00:04:33.307 "aliases": [ 00:04:33.307 "2a19b874-3c82-47c3-8ac5-e06457915936" 00:04:33.307 ], 00:04:33.307 "product_name": "Malloc disk", 00:04:33.307 "block_size": 512, 00:04:33.307 "num_blocks": 16384, 00:04:33.307 "uuid": "2a19b874-3c82-47c3-8ac5-e06457915936", 00:04:33.307 "assigned_rate_limits": { 00:04:33.307 "rw_ios_per_sec": 0, 00:04:33.307 "rw_mbytes_per_sec": 0, 00:04:33.307 "r_mbytes_per_sec": 0, 00:04:33.307 "w_mbytes_per_sec": 0 00:04:33.307 }, 00:04:33.307 "claimed": true, 00:04:33.307 "claim_type": "exclusive_write", 00:04:33.307 "zoned": false, 00:04:33.307 "supported_io_types": { 00:04:33.307 "read": true, 00:04:33.307 "write": true, 00:04:33.307 "unmap": true, 00:04:33.307 "flush": true, 00:04:33.307 "reset": true, 00:04:33.307 "nvme_admin": false, 00:04:33.307 "nvme_io": false, 00:04:33.307 "nvme_io_md": false, 00:04:33.307 "write_zeroes": true, 00:04:33.307 "zcopy": true, 00:04:33.307 "get_zone_info": false, 00:04:33.307 "zone_management": false, 00:04:33.307 "zone_append": false, 00:04:33.307 "compare": false, 00:04:33.307 "compare_and_write": false, 00:04:33.307 "abort": true, 00:04:33.307 "seek_hole": false, 00:04:33.307 "seek_data": false, 00:04:33.307 "copy": true, 00:04:33.307 "nvme_iov_md": false 00:04:33.307 }, 00:04:33.307 "memory_domains": [ 00:04:33.307 { 00:04:33.307 "dma_device_id": "system", 00:04:33.307 "dma_device_type": 1 00:04:33.307 }, 00:04:33.307 { 00:04:33.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.307 "dma_device_type": 2 00:04:33.307 } 00:04:33.307 ], 00:04:33.307 "driver_specific": {} 00:04:33.307 }, 00:04:33.307 { 00:04:33.307 "name": "Passthru0", 00:04:33.307 "aliases": [ 00:04:33.307 "f3ee4ef6-62d3-56c7-8ec2-9f70b01446d0" 00:04:33.307 ], 00:04:33.307 "product_name": "passthru", 00:04:33.307 "block_size": 512, 00:04:33.307 "num_blocks": 16384, 00:04:33.307 "uuid": "f3ee4ef6-62d3-56c7-8ec2-9f70b01446d0", 00:04:33.307 "assigned_rate_limits": { 00:04:33.307 "rw_ios_per_sec": 0, 00:04:33.307 "rw_mbytes_per_sec": 0, 00:04:33.307 "r_mbytes_per_sec": 0, 00:04:33.307 "w_mbytes_per_sec": 0 00:04:33.307 }, 00:04:33.307 "claimed": false, 00:04:33.307 "zoned": false, 00:04:33.307 "supported_io_types": { 00:04:33.307 "read": true, 00:04:33.307 "write": true, 00:04:33.307 "unmap": true, 00:04:33.307 "flush": true, 00:04:33.307 "reset": true, 00:04:33.307 "nvme_admin": false, 00:04:33.307 "nvme_io": false, 00:04:33.307 "nvme_io_md": false, 00:04:33.307 "write_zeroes": true, 00:04:33.307 "zcopy": true, 00:04:33.307 "get_zone_info": false, 00:04:33.307 "zone_management": false, 00:04:33.307 "zone_append": false, 00:04:33.307 "compare": false, 00:04:33.307 "compare_and_write": false, 00:04:33.307 "abort": true, 00:04:33.307 "seek_hole": false, 00:04:33.307 "seek_data": false, 00:04:33.307 "copy": true, 00:04:33.307 "nvme_iov_md": false 00:04:33.307 }, 00:04:33.307 "memory_domains": [ 00:04:33.307 { 00:04:33.307 "dma_device_id": "system", 00:04:33.307 "dma_device_type": 1 00:04:33.307 }, 00:04:33.307 { 00:04:33.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.307 "dma_device_type": 2 
00:04:33.307 } 00:04:33.307 ], 00:04:33.307 "driver_specific": { 00:04:33.307 "passthru": { 00:04:33.307 "name": "Passthru0", 00:04:33.307 "base_bdev_name": "Malloc0" 00:04:33.307 } 00:04:33.307 } 00:04:33.307 } 00:04:33.307 ]' 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.307 ************************************ 00:04:33.307 END TEST rpc_integrity 00:04:33.307 ************************************ 00:04:33.307 23:48:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.307 00:04:33.307 real 0m0.248s 00:04:33.307 user 0m0.125s 00:04:33.307 sys 0m0.036s 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 23:48:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.307 23:48:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.307 23:48:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.307 23:48:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.307 ************************************ 00:04:33.307 START TEST rpc_plugins 00:04:33.307 ************************************ 00:04:33.307 23:48:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:33.307 23:48:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.307 23:48:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.307 23:48:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.566 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.566 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.566 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.566 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.566 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.566 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.566 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.566 { 00:04:33.566 "name": "Malloc1", 00:04:33.566 "aliases": 
[ 00:04:33.566 "2aa894d4-dfe0-4f3b-b27f-74fedb275d41" 00:04:33.566 ], 00:04:33.566 "product_name": "Malloc disk", 00:04:33.566 "block_size": 4096, 00:04:33.566 "num_blocks": 256, 00:04:33.566 "uuid": "2aa894d4-dfe0-4f3b-b27f-74fedb275d41", 00:04:33.566 "assigned_rate_limits": { 00:04:33.566 "rw_ios_per_sec": 0, 00:04:33.566 "rw_mbytes_per_sec": 0, 00:04:33.566 "r_mbytes_per_sec": 0, 00:04:33.566 "w_mbytes_per_sec": 0 00:04:33.566 }, 00:04:33.566 "claimed": false, 00:04:33.566 "zoned": false, 00:04:33.566 "supported_io_types": { 00:04:33.566 "read": true, 00:04:33.566 "write": true, 00:04:33.566 "unmap": true, 00:04:33.566 "flush": true, 00:04:33.566 "reset": true, 00:04:33.566 "nvme_admin": false, 00:04:33.566 "nvme_io": false, 00:04:33.566 "nvme_io_md": false, 00:04:33.566 "write_zeroes": true, 00:04:33.566 "zcopy": true, 00:04:33.566 "get_zone_info": false, 00:04:33.566 "zone_management": false, 00:04:33.566 "zone_append": false, 00:04:33.567 "compare": false, 00:04:33.567 "compare_and_write": false, 00:04:33.567 "abort": true, 00:04:33.567 "seek_hole": false, 00:04:33.567 "seek_data": false, 00:04:33.567 "copy": true, 00:04:33.567 "nvme_iov_md": false 00:04:33.567 }, 00:04:33.567 "memory_domains": [ 00:04:33.567 { 00:04:33.567 "dma_device_id": "system", 00:04:33.567 "dma_device_type": 1 00:04:33.567 }, 00:04:33.567 { 00:04:33.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.567 "dma_device_type": 2 00:04:33.567 } 00:04:33.567 ], 00:04:33.567 "driver_specific": {} 00:04:33.567 } 00:04:33.567 ]' 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:33.567 ************************************ 00:04:33.567 END TEST rpc_plugins 00:04:33.567 ************************************ 00:04:33.567 23:48:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:33.567 00:04:33.567 real 0m0.120s 00:04:33.567 user 0m0.072s 00:04:33.567 sys 0m0.011s 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.567 23:48:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.567 23:48:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:33.567 23:48:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.567 23:48:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.567 23:48:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.567 ************************************ 00:04:33.567 START TEST rpc_trace_cmd_test 00:04:33.567 ************************************ 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:33.567 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57173", 00:04:33.567 "tpoint_group_mask": "0x8", 00:04:33.567 "iscsi_conn": { 00:04:33.567 "mask": "0x2", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "scsi": { 00:04:33.567 "mask": "0x4", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "bdev": { 00:04:33.567 "mask": "0x8", 00:04:33.567 "tpoint_mask": "0xffffffffffffffff" 00:04:33.567 }, 00:04:33.567 "nvmf_rdma": { 00:04:33.567 "mask": "0x10", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "nvmf_tcp": { 00:04:33.567 "mask": "0x20", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "ftl": { 00:04:33.567 "mask": "0x40", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "blobfs": { 00:04:33.567 "mask": "0x80", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "dsa": { 00:04:33.567 "mask": "0x200", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "thread": { 00:04:33.567 "mask": "0x400", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "nvme_pcie": { 00:04:33.567 "mask": "0x800", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "iaa": { 00:04:33.567 "mask": "0x1000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "nvme_tcp": { 00:04:33.567 "mask": "0x2000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "bdev_nvme": { 00:04:33.567 "mask": "0x4000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "sock": { 00:04:33.567 "mask": "0x8000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "blob": { 00:04:33.567 "mask": "0x10000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "bdev_raid": { 00:04:33.567 "mask": "0x20000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 }, 00:04:33.567 "scheduler": { 00:04:33.567 "mask": "0x40000", 00:04:33.567 "tpoint_mask": "0x0" 00:04:33.567 } 00:04:33.567 }' 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:33.567 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:33.826 ************************************ 00:04:33.826 END TEST rpc_trace_cmd_test 00:04:33.826 ************************************ 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:33.826 00:04:33.826 real 0m0.175s 
00:04:33.826 user 0m0.151s 00:04:33.826 sys 0m0.015s 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.826 23:48:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:33.826 23:48:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:33.826 23:48:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:33.826 23:48:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:33.826 23:48:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.826 23:48:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.826 23:48:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.826 ************************************ 00:04:33.826 START TEST rpc_daemon_integrity 00:04:33.826 ************************************ 00:04:33.826 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.826 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.826 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.827 { 00:04:33.827 "name": "Malloc2", 00:04:33.827 "aliases": [ 00:04:33.827 "f82dfcf8-9f06-44f9-a0c0-ba586ea366a2" 00:04:33.827 ], 00:04:33.827 "product_name": "Malloc disk", 00:04:33.827 "block_size": 512, 00:04:33.827 "num_blocks": 16384, 00:04:33.827 "uuid": "f82dfcf8-9f06-44f9-a0c0-ba586ea366a2", 00:04:33.827 "assigned_rate_limits": { 00:04:33.827 "rw_ios_per_sec": 0, 00:04:33.827 "rw_mbytes_per_sec": 0, 00:04:33.827 "r_mbytes_per_sec": 0, 00:04:33.827 "w_mbytes_per_sec": 0 00:04:33.827 }, 00:04:33.827 "claimed": false, 00:04:33.827 "zoned": false, 00:04:33.827 "supported_io_types": { 00:04:33.827 "read": true, 00:04:33.827 "write": true, 00:04:33.827 "unmap": true, 00:04:33.827 "flush": true, 00:04:33.827 "reset": true, 00:04:33.827 "nvme_admin": false, 00:04:33.827 "nvme_io": false, 00:04:33.827 "nvme_io_md": false, 00:04:33.827 "write_zeroes": true, 00:04:33.827 "zcopy": true, 00:04:33.827 "get_zone_info": false, 00:04:33.827 "zone_management": false, 00:04:33.827 "zone_append": false, 00:04:33.827 "compare": false, 00:04:33.827 
"compare_and_write": false, 00:04:33.827 "abort": true, 00:04:33.827 "seek_hole": false, 00:04:33.827 "seek_data": false, 00:04:33.827 "copy": true, 00:04:33.827 "nvme_iov_md": false 00:04:33.827 }, 00:04:33.827 "memory_domains": [ 00:04:33.827 { 00:04:33.827 "dma_device_id": "system", 00:04:33.827 "dma_device_type": 1 00:04:33.827 }, 00:04:33.827 { 00:04:33.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.827 "dma_device_type": 2 00:04:33.827 } 00:04:33.827 ], 00:04:33.827 "driver_specific": {} 00:04:33.827 } 00:04:33.827 ]' 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.827 [2024-11-18 23:48:40.506734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:33.827 [2024-11-18 23:48:40.506783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.827 [2024-11-18 23:48:40.506801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:33.827 [2024-11-18 23:48:40.506812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.827 [2024-11-18 23:48:40.508919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.827 [2024-11-18 23:48:40.508955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.827 Passthru0 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.827 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.085 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.085 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.085 { 00:04:34.085 "name": "Malloc2", 00:04:34.085 "aliases": [ 00:04:34.085 "f82dfcf8-9f06-44f9-a0c0-ba586ea366a2" 00:04:34.085 ], 00:04:34.085 "product_name": "Malloc disk", 00:04:34.085 "block_size": 512, 00:04:34.085 "num_blocks": 16384, 00:04:34.085 "uuid": "f82dfcf8-9f06-44f9-a0c0-ba586ea366a2", 00:04:34.085 "assigned_rate_limits": { 00:04:34.085 "rw_ios_per_sec": 0, 00:04:34.085 "rw_mbytes_per_sec": 0, 00:04:34.085 "r_mbytes_per_sec": 0, 00:04:34.085 "w_mbytes_per_sec": 0 00:04:34.085 }, 00:04:34.085 "claimed": true, 00:04:34.085 "claim_type": "exclusive_write", 00:04:34.085 "zoned": false, 00:04:34.085 "supported_io_types": { 00:04:34.085 "read": true, 00:04:34.085 "write": true, 00:04:34.085 "unmap": true, 00:04:34.085 "flush": true, 00:04:34.085 "reset": true, 00:04:34.085 "nvme_admin": false, 00:04:34.085 "nvme_io": false, 00:04:34.085 "nvme_io_md": false, 00:04:34.085 "write_zeroes": true, 00:04:34.085 "zcopy": true, 00:04:34.085 "get_zone_info": false, 00:04:34.085 "zone_management": false, 00:04:34.085 "zone_append": false, 00:04:34.085 "compare": false, 00:04:34.085 "compare_and_write": false, 00:04:34.085 "abort": true, 00:04:34.085 "seek_hole": false, 00:04:34.085 "seek_data": false, 
00:04:34.085 "copy": true, 00:04:34.085 "nvme_iov_md": false 00:04:34.085 }, 00:04:34.085 "memory_domains": [ 00:04:34.085 { 00:04:34.085 "dma_device_id": "system", 00:04:34.085 "dma_device_type": 1 00:04:34.085 }, 00:04:34.085 { 00:04:34.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.085 "dma_device_type": 2 00:04:34.085 } 00:04:34.085 ], 00:04:34.085 "driver_specific": {} 00:04:34.085 }, 00:04:34.085 { 00:04:34.085 "name": "Passthru0", 00:04:34.085 "aliases": [ 00:04:34.085 "4cc984b8-200c-5e25-a89d-2fd0626746dd" 00:04:34.085 ], 00:04:34.085 "product_name": "passthru", 00:04:34.085 "block_size": 512, 00:04:34.085 "num_blocks": 16384, 00:04:34.085 "uuid": "4cc984b8-200c-5e25-a89d-2fd0626746dd", 00:04:34.085 "assigned_rate_limits": { 00:04:34.085 "rw_ios_per_sec": 0, 00:04:34.085 "rw_mbytes_per_sec": 0, 00:04:34.085 "r_mbytes_per_sec": 0, 00:04:34.085 "w_mbytes_per_sec": 0 00:04:34.085 }, 00:04:34.085 "claimed": false, 00:04:34.085 "zoned": false, 00:04:34.085 "supported_io_types": { 00:04:34.085 "read": true, 00:04:34.085 "write": true, 00:04:34.085 "unmap": true, 00:04:34.085 "flush": true, 00:04:34.086 "reset": true, 00:04:34.086 "nvme_admin": false, 00:04:34.086 "nvme_io": false, 00:04:34.086 "nvme_io_md": false, 00:04:34.086 "write_zeroes": true, 00:04:34.086 "zcopy": true, 00:04:34.086 "get_zone_info": false, 00:04:34.086 "zone_management": false, 00:04:34.086 "zone_append": false, 00:04:34.086 "compare": false, 00:04:34.086 "compare_and_write": false, 00:04:34.086 "abort": true, 00:04:34.086 "seek_hole": false, 00:04:34.086 "seek_data": false, 00:04:34.086 "copy": true, 00:04:34.086 "nvme_iov_md": false 00:04:34.086 }, 00:04:34.086 "memory_domains": [ 00:04:34.086 { 00:04:34.086 "dma_device_id": "system", 00:04:34.086 "dma_device_type": 1 00:04:34.086 }, 00:04:34.086 { 00:04:34.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.086 "dma_device_type": 2 00:04:34.086 } 00:04:34.086 ], 00:04:34.086 "driver_specific": { 00:04:34.086 "passthru": { 00:04:34.086 "name": "Passthru0", 00:04:34.086 "base_bdev_name": "Malloc2" 00:04:34.086 } 00:04:34.086 } 00:04:34.086 } 00:04:34.086 ]' 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.086 ************************************ 00:04:34.086 END TEST rpc_daemon_integrity 00:04:34.086 ************************************ 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.086 00:04:34.086 real 0m0.254s 00:04:34.086 user 0m0.124s 00:04:34.086 sys 0m0.041s 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.086 23:48:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.086 23:48:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.086 23:48:40 rpc -- rpc/rpc.sh@84 -- # killprocess 57173 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 57173 ']' 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@958 -- # kill -0 57173 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57173 00:04:34.086 killing process with pid 57173 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57173' 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@973 -- # kill 57173 00:04:34.086 23:48:40 rpc -- common/autotest_common.sh@978 -- # wait 57173 00:04:35.465 ************************************ 00:04:35.465 END TEST rpc 00:04:35.465 ************************************ 00:04:35.465 00:04:35.465 real 0m3.444s 00:04:35.465 user 0m3.842s 00:04:35.465 sys 0m0.686s 00:04:35.465 23:48:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.465 23:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 23:48:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.465 23:48:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.465 23:48:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.465 23:48:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 ************************************ 00:04:35.465 START TEST skip_rpc 00:04:35.465 ************************************ 00:04:35.465 23:48:41 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.465 * Looking for test storage... 
00:04:35.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.465 23:48:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.465 --rc genhtml_branch_coverage=1 00:04:35.465 --rc genhtml_function_coverage=1 00:04:35.465 --rc genhtml_legend=1 00:04:35.465 --rc geninfo_all_blocks=1 00:04:35.465 --rc geninfo_unexecuted_blocks=1 00:04:35.465 00:04:35.465 ' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.465 --rc genhtml_branch_coverage=1 00:04:35.465 --rc genhtml_function_coverage=1 00:04:35.465 --rc genhtml_legend=1 00:04:35.465 --rc geninfo_all_blocks=1 00:04:35.465 --rc geninfo_unexecuted_blocks=1 00:04:35.465 00:04:35.465 ' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.465 --rc genhtml_branch_coverage=1 00:04:35.465 --rc genhtml_function_coverage=1 00:04:35.465 --rc genhtml_legend=1 00:04:35.465 --rc geninfo_all_blocks=1 00:04:35.465 --rc geninfo_unexecuted_blocks=1 00:04:35.465 00:04:35.465 ' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.465 --rc genhtml_branch_coverage=1 00:04:35.465 --rc genhtml_function_coverage=1 00:04:35.465 --rc genhtml_legend=1 00:04:35.465 --rc geninfo_all_blocks=1 00:04:35.465 --rc geninfo_unexecuted_blocks=1 00:04:35.465 00:04:35.465 ' 00:04:35.465 23:48:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.465 23:48:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.465 23:48:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.465 23:48:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 ************************************ 00:04:35.465 START TEST skip_rpc 00:04:35.465 ************************************ 00:04:35.465 23:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:35.465 23:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57380 00:04:35.465 23:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.465 23:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.465 23:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.726 [2024-11-18 23:48:42.162137] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:35.726 [2024-11-18 23:48:42.162256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57380 ] 00:04:35.726 [2024-11-18 23:48:42.322514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.987 [2024-11-18 23:48:42.434377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57380 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57380 ']' 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57380 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57380 00:04:41.267 killing process with pid 57380 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57380' 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57380 00:04:41.267 23:48:47 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57380 00:04:41.946 ************************************ 00:04:41.946 END TEST skip_rpc 00:04:41.946 ************************************ 00:04:41.946 00:04:41.946 real 0m6.202s 00:04:41.946 user 0m5.738s 00:04:41.946 sys 0m0.356s 00:04:41.947 23:48:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.947 23:48:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:41.947 23:48:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.947 23:48:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.947 23:48:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.947 23:48:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.947 ************************************ 00:04:41.947 START TEST skip_rpc_with_json 00:04:41.947 ************************************ 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57473 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57473 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57473 ']' 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.947 23:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.947 [2024-11-18 23:48:48.424641] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:41.947 [2024-11-18 23:48:48.424757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57473 ] 00:04:41.947 [2024-11-18 23:48:48.586047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.204 [2024-11-18 23:48:48.683656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.769 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.769 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:42.769 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.769 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.769 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.769 [2024-11-18 23:48:49.275536] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.769 request: 00:04:42.769 { 00:04:42.769 "trtype": "tcp", 00:04:42.770 "method": "nvmf_get_transports", 00:04:42.770 "req_id": 1 00:04:42.770 } 00:04:42.770 Got JSON-RPC error response 00:04:42.770 response: 00:04:42.770 { 00:04:42.770 "code": -19, 00:04:42.770 "message": "No such device" 00:04:42.770 } 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.770 [2024-11-18 23:48:49.287644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.770 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.770 { 00:04:42.770 "subsystems": [ 00:04:42.770 { 00:04:42.770 "subsystem": "fsdev", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "fsdev_set_opts", 00:04:42.770 "params": { 00:04:42.770 "fsdev_io_pool_size": 65535, 00:04:42.770 "fsdev_io_cache_size": 256 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "keyring", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "iobuf", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "iobuf_set_options", 00:04:42.770 "params": { 00:04:42.770 "small_pool_count": 8192, 00:04:42.770 "large_pool_count": 1024, 00:04:42.770 "small_bufsize": 8192, 00:04:42.770 "large_bufsize": 135168, 00:04:42.770 "enable_numa": false 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "sock", 00:04:42.770 "config": [ 00:04:42.770 { 
00:04:42.770 "method": "sock_set_default_impl", 00:04:42.770 "params": { 00:04:42.770 "impl_name": "posix" 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "sock_impl_set_options", 00:04:42.770 "params": { 00:04:42.770 "impl_name": "ssl", 00:04:42.770 "recv_buf_size": 4096, 00:04:42.770 "send_buf_size": 4096, 00:04:42.770 "enable_recv_pipe": true, 00:04:42.770 "enable_quickack": false, 00:04:42.770 "enable_placement_id": 0, 00:04:42.770 "enable_zerocopy_send_server": true, 00:04:42.770 "enable_zerocopy_send_client": false, 00:04:42.770 "zerocopy_threshold": 0, 00:04:42.770 "tls_version": 0, 00:04:42.770 "enable_ktls": false 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "sock_impl_set_options", 00:04:42.770 "params": { 00:04:42.770 "impl_name": "posix", 00:04:42.770 "recv_buf_size": 2097152, 00:04:42.770 "send_buf_size": 2097152, 00:04:42.770 "enable_recv_pipe": true, 00:04:42.770 "enable_quickack": false, 00:04:42.770 "enable_placement_id": 0, 00:04:42.770 "enable_zerocopy_send_server": true, 00:04:42.770 "enable_zerocopy_send_client": false, 00:04:42.770 "zerocopy_threshold": 0, 00:04:42.770 "tls_version": 0, 00:04:42.770 "enable_ktls": false 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "vmd", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "accel", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "accel_set_options", 00:04:42.770 "params": { 00:04:42.770 "small_cache_size": 128, 00:04:42.770 "large_cache_size": 16, 00:04:42.770 "task_count": 2048, 00:04:42.770 "sequence_count": 2048, 00:04:42.770 "buf_count": 2048 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "bdev", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "bdev_set_options", 00:04:42.770 "params": { 00:04:42.770 "bdev_io_pool_size": 65535, 00:04:42.770 "bdev_io_cache_size": 256, 00:04:42.770 "bdev_auto_examine": true, 00:04:42.770 "iobuf_small_cache_size": 128, 00:04:42.770 "iobuf_large_cache_size": 16 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "bdev_raid_set_options", 00:04:42.770 "params": { 00:04:42.770 "process_window_size_kb": 1024, 00:04:42.770 "process_max_bandwidth_mb_sec": 0 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "bdev_iscsi_set_options", 00:04:42.770 "params": { 00:04:42.770 "timeout_sec": 30 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "bdev_nvme_set_options", 00:04:42.770 "params": { 00:04:42.770 "action_on_timeout": "none", 00:04:42.770 "timeout_us": 0, 00:04:42.770 "timeout_admin_us": 0, 00:04:42.770 "keep_alive_timeout_ms": 10000, 00:04:42.770 "arbitration_burst": 0, 00:04:42.770 "low_priority_weight": 0, 00:04:42.770 "medium_priority_weight": 0, 00:04:42.770 "high_priority_weight": 0, 00:04:42.770 "nvme_adminq_poll_period_us": 10000, 00:04:42.770 "nvme_ioq_poll_period_us": 0, 00:04:42.770 "io_queue_requests": 0, 00:04:42.770 "delay_cmd_submit": true, 00:04:42.770 "transport_retry_count": 4, 00:04:42.770 "bdev_retry_count": 3, 00:04:42.770 "transport_ack_timeout": 0, 00:04:42.770 "ctrlr_loss_timeout_sec": 0, 00:04:42.770 "reconnect_delay_sec": 0, 00:04:42.770 "fast_io_fail_timeout_sec": 0, 00:04:42.770 "disable_auto_failback": false, 00:04:42.770 "generate_uuids": false, 00:04:42.770 "transport_tos": 0, 00:04:42.770 "nvme_error_stat": false, 00:04:42.770 "rdma_srq_size": 0, 00:04:42.770 "io_path_stat": false, 
00:04:42.770 "allow_accel_sequence": false, 00:04:42.770 "rdma_max_cq_size": 0, 00:04:42.770 "rdma_cm_event_timeout_ms": 0, 00:04:42.770 "dhchap_digests": [ 00:04:42.770 "sha256", 00:04:42.770 "sha384", 00:04:42.770 "sha512" 00:04:42.770 ], 00:04:42.770 "dhchap_dhgroups": [ 00:04:42.770 "null", 00:04:42.770 "ffdhe2048", 00:04:42.770 "ffdhe3072", 00:04:42.770 "ffdhe4096", 00:04:42.770 "ffdhe6144", 00:04:42.770 "ffdhe8192" 00:04:42.770 ] 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "bdev_nvme_set_hotplug", 00:04:42.770 "params": { 00:04:42.770 "period_us": 100000, 00:04:42.770 "enable": false 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "bdev_wait_for_examine" 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "scsi", 00:04:42.770 "config": null 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "scheduler", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "framework_set_scheduler", 00:04:42.770 "params": { 00:04:42.770 "name": "static" 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "vhost_scsi", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "vhost_blk", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "ublk", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "nbd", 00:04:42.770 "config": [] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "nvmf", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "nvmf_set_config", 00:04:42.770 "params": { 00:04:42.770 "discovery_filter": "match_any", 00:04:42.770 "admin_cmd_passthru": { 00:04:42.770 "identify_ctrlr": false 00:04:42.770 }, 00:04:42.770 "dhchap_digests": [ 00:04:42.770 "sha256", 00:04:42.770 "sha384", 00:04:42.770 "sha512" 00:04:42.770 ], 00:04:42.770 "dhchap_dhgroups": [ 00:04:42.770 "null", 00:04:42.770 "ffdhe2048", 00:04:42.770 "ffdhe3072", 00:04:42.770 "ffdhe4096", 00:04:42.770 "ffdhe6144", 00:04:42.770 "ffdhe8192" 00:04:42.770 ] 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "nvmf_set_max_subsystems", 00:04:42.770 "params": { 00:04:42.770 "max_subsystems": 1024 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "nvmf_set_crdt", 00:04:42.770 "params": { 00:04:42.770 "crdt1": 0, 00:04:42.770 "crdt2": 0, 00:04:42.770 "crdt3": 0 00:04:42.770 } 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "method": "nvmf_create_transport", 00:04:42.770 "params": { 00:04:42.770 "trtype": "TCP", 00:04:42.770 "max_queue_depth": 128, 00:04:42.770 "max_io_qpairs_per_ctrlr": 127, 00:04:42.770 "in_capsule_data_size": 4096, 00:04:42.770 "max_io_size": 131072, 00:04:42.770 "io_unit_size": 131072, 00:04:42.770 "max_aq_depth": 128, 00:04:42.770 "num_shared_buffers": 511, 00:04:42.770 "buf_cache_size": 4294967295, 00:04:42.770 "dif_insert_or_strip": false, 00:04:42.770 "zcopy": false, 00:04:42.770 "c2h_success": true, 00:04:42.770 "sock_priority": 0, 00:04:42.770 "abort_timeout_sec": 1, 00:04:42.770 "ack_timeout": 0, 00:04:42.770 "data_wr_pool_size": 0 00:04:42.770 } 00:04:42.770 } 00:04:42.770 ] 00:04:42.770 }, 00:04:42.770 { 00:04:42.770 "subsystem": "iscsi", 00:04:42.770 "config": [ 00:04:42.770 { 00:04:42.770 "method": "iscsi_set_options", 00:04:42.770 "params": { 00:04:42.770 "node_base": "iqn.2016-06.io.spdk", 00:04:42.770 "max_sessions": 128, 00:04:42.770 "max_connections_per_session": 2, 00:04:42.770 "max_queue_depth": 64, 00:04:42.770 
"default_time2wait": 2, 00:04:42.770 "default_time2retain": 20, 00:04:42.770 "first_burst_length": 8192, 00:04:42.770 "immediate_data": true, 00:04:42.770 "allow_duplicated_isid": false, 00:04:42.770 "error_recovery_level": 0, 00:04:42.770 "nop_timeout": 60, 00:04:42.770 "nop_in_interval": 30, 00:04:42.770 "disable_chap": false, 00:04:42.770 "require_chap": false, 00:04:42.770 "mutual_chap": false, 00:04:42.770 "chap_group": 0, 00:04:42.771 "max_large_datain_per_connection": 64, 00:04:42.771 "max_r2t_per_connection": 4, 00:04:42.771 "pdu_pool_size": 36864, 00:04:42.771 "immediate_data_pool_size": 16384, 00:04:42.771 "data_out_pool_size": 2048 00:04:42.771 } 00:04:42.771 } 00:04:42.771 ] 00:04:42.771 } 00:04:42.771 ] 00:04:42.771 } 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57473 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57473 ']' 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57473 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.771 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57473 00:04:43.028 killing process with pid 57473 00:04:43.028 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.028 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.028 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57473' 00:04:43.028 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57473 00:04:43.028 23:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57473 00:04:44.402 23:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57518 00:04:44.402 23:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:44.402 23:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57518 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57518 ']' 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57518 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57518 00:04:49.670 killing process with pid 57518 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57518' 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57518 00:04:49.670 23:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57518 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.604 ************************************ 00:04:50.604 END TEST skip_rpc_with_json 00:04:50.604 ************************************ 00:04:50.604 00:04:50.604 real 0m8.811s 00:04:50.604 user 0m8.419s 00:04:50.604 sys 0m0.622s 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.604 23:48:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.604 23:48:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.604 23:48:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.604 23:48:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.604 ************************************ 00:04:50.604 START TEST skip_rpc_with_delay 00:04:50.604 ************************************ 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:50.604 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.604 [2024-11-18 23:48:57.273407] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:50.862 ************************************ 00:04:50.862 END TEST skip_rpc_with_delay 00:04:50.862 ************************************ 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.862 00:04:50.862 real 0m0.119s 00:04:50.862 user 0m0.061s 00:04:50.862 sys 0m0.057s 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.862 23:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.862 23:48:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.863 23:48:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.863 23:48:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.863 23:48:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.863 23:48:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.863 23:48:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.863 ************************************ 00:04:50.863 START TEST exit_on_failed_rpc_init 00:04:50.863 ************************************ 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57635 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57635 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57635 ']' 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.863 23:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.863 [2024-11-18 23:48:57.454674] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:50.863 [2024-11-18 23:48:57.454793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57635 ] 00:04:51.120 [2024-11-18 23:48:57.606109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.120 [2024-11-18 23:48:57.685471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:51.686 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.686 [2024-11-18 23:48:58.356934] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:51.686 [2024-11-18 23:48:58.357232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57653 ] 00:04:51.944 [2024-11-18 23:48:58.517919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.945 [2024-11-18 23:48:58.615031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.945 [2024-11-18 23:48:58.615250] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:51.945 [2024-11-18 23:48:58.615491] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.945 [2024-11-18 23:48:58.615611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57635 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57635 ']' 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57635 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57635 00:04:52.203 killing process with pid 57635 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57635' 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57635 00:04:52.203 23:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57635 00:04:53.590 00:04:53.590 real 0m2.594s 00:04:53.590 user 0m2.914s 00:04:53.590 sys 0m0.393s 00:04:53.590 23:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.590 23:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.590 ************************************ 00:04:53.590 END TEST exit_on_failed_rpc_init 00:04:53.590 ************************************ 00:04:53.590 23:49:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.590 ************************************ 00:04:53.590 END TEST skip_rpc 00:04:53.590 ************************************ 00:04:53.590 00:04:53.590 real 0m18.069s 00:04:53.590 user 0m17.269s 00:04:53.590 sys 0m1.597s 00:04:53.590 23:49:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.590 23:49:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.590 23:49:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.590 23:49:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.590 23:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.590 23:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.590 
************************************ 00:04:53.590 START TEST rpc_client 00:04:53.590 ************************************ 00:04:53.590 23:49:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.590 * Looking for test storage... 00:04:53.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:53.590 23:49:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.590 23:49:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.590 23:49:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.590 23:49:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.590 23:49:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.591 23:49:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.591 23:49:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.591 --rc genhtml_branch_coverage=1 00:04:53.591 --rc genhtml_function_coverage=1 00:04:53.591 --rc genhtml_legend=1 00:04:53.591 --rc geninfo_all_blocks=1 00:04:53.591 --rc geninfo_unexecuted_blocks=1 00:04:53.591 00:04:53.591 ' 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.591 --rc genhtml_branch_coverage=1 00:04:53.591 --rc genhtml_function_coverage=1 00:04:53.591 --rc genhtml_legend=1 00:04:53.591 --rc geninfo_all_blocks=1 00:04:53.591 --rc geninfo_unexecuted_blocks=1 00:04:53.591 00:04:53.591 ' 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.591 --rc genhtml_branch_coverage=1 00:04:53.591 --rc genhtml_function_coverage=1 00:04:53.591 --rc genhtml_legend=1 00:04:53.591 --rc geninfo_all_blocks=1 00:04:53.591 --rc geninfo_unexecuted_blocks=1 00:04:53.591 00:04:53.591 ' 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.591 --rc genhtml_branch_coverage=1 00:04:53.591 --rc genhtml_function_coverage=1 00:04:53.591 --rc genhtml_legend=1 00:04:53.591 --rc geninfo_all_blocks=1 00:04:53.591 --rc geninfo_unexecuted_blocks=1 00:04:53.591 00:04:53.591 ' 00:04:53.591 23:49:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:53.591 OK 00:04:53.591 23:49:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.591 00:04:53.591 real 0m0.177s 00:04:53.591 user 0m0.111s 00:04:53.591 sys 0m0.072s 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.591 23:49:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.591 ************************************ 00:04:53.591 END TEST rpc_client 00:04:53.591 ************************************ 00:04:53.591 23:49:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.591 23:49:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.591 23:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.591 23:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.591 ************************************ 00:04:53.591 START TEST json_config 00:04:53.591 ************************************ 00:04:53.591 23:49:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.881 23:49:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.881 23:49:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.881 23:49:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.881 23:49:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.881 23:49:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.881 23:49:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:53.881 23:49:00 json_config -- scripts/common.sh@345 -- # : 1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.881 23:49:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.881 23:49:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@353 -- # local d=1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.881 23:49:00 json_config -- scripts/common.sh@355 -- # echo 1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.881 23:49:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@353 -- # local d=2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.881 23:49:00 json_config -- scripts/common.sh@355 -- # echo 2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.881 23:49:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.881 23:49:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.881 23:49:00 json_config -- scripts/common.sh@368 -- # return 0 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.881 --rc genhtml_branch_coverage=1 00:04:53.881 --rc genhtml_function_coverage=1 00:04:53.881 --rc genhtml_legend=1 00:04:53.881 --rc geninfo_all_blocks=1 00:04:53.881 --rc geninfo_unexecuted_blocks=1 00:04:53.881 00:04:53.881 ' 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.881 --rc genhtml_branch_coverage=1 00:04:53.881 --rc genhtml_function_coverage=1 00:04:53.881 --rc genhtml_legend=1 00:04:53.881 --rc geninfo_all_blocks=1 00:04:53.881 --rc geninfo_unexecuted_blocks=1 00:04:53.881 00:04:53.881 ' 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.881 --rc genhtml_branch_coverage=1 00:04:53.881 --rc genhtml_function_coverage=1 00:04:53.881 --rc genhtml_legend=1 00:04:53.881 --rc geninfo_all_blocks=1 00:04:53.881 --rc geninfo_unexecuted_blocks=1 00:04:53.881 00:04:53.881 ' 00:04:53.881 23:49:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.881 --rc genhtml_branch_coverage=1 00:04:53.881 --rc genhtml_function_coverage=1 00:04:53.881 --rc genhtml_legend=1 00:04:53.881 --rc geninfo_all_blocks=1 00:04:53.881 --rc geninfo_unexecuted_blocks=1 00:04:53.881 00:04:53.881 ' 00:04:53.881 23:49:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.881 23:49:00 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7461d676-225a-4a53-b691-d62ceecf7cb1 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7461d676-225a-4a53-b691-d62ceecf7cb1 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.881 23:49:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.881 23:49:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.881 23:49:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.881 23:49:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.881 23:49:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.881 23:49:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.881 23:49:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.881 23:49:00 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.881 23:49:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@51 -- # : 0 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.881 23:49:00 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.881 23:49:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:53.882 WARNING: No tests are enabled so not running JSON configuration tests 00:04:53.882 23:49:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:53.882 00:04:53.882 real 0m0.141s 00:04:53.882 user 0m0.104s 00:04:53.882 sys 0m0.037s 00:04:53.882 23:49:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.882 23:49:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.882 ************************************ 00:04:53.882 END TEST json_config 00:04:53.882 ************************************ 00:04:53.882 23:49:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.882 23:49:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.882 23:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.882 23:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.882 ************************************ 00:04:53.882 START TEST json_config_extra_key 00:04:53.882 ************************************ 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.882 23:49:00 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.882 --rc genhtml_branch_coverage=1 00:04:53.882 --rc genhtml_function_coverage=1 00:04:53.882 --rc genhtml_legend=1 00:04:53.882 --rc geninfo_all_blocks=1 00:04:53.882 --rc geninfo_unexecuted_blocks=1 00:04:53.882 00:04:53.882 ' 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.882 --rc genhtml_branch_coverage=1 00:04:53.882 --rc genhtml_function_coverage=1 00:04:53.882 --rc genhtml_legend=1 00:04:53.882 --rc geninfo_all_blocks=1 00:04:53.882 --rc geninfo_unexecuted_blocks=1 00:04:53.882 00:04:53.882 ' 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.882 --rc genhtml_branch_coverage=1 00:04:53.882 --rc genhtml_function_coverage=1 00:04:53.882 --rc genhtml_legend=1 00:04:53.882 --rc geninfo_all_blocks=1 00:04:53.882 --rc geninfo_unexecuted_blocks=1 00:04:53.882 00:04:53.882 ' 00:04:53.882 23:49:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.882 --rc genhtml_branch_coverage=1 00:04:53.882 --rc 
genhtml_function_coverage=1 00:04:53.882 --rc genhtml_legend=1 00:04:53.882 --rc geninfo_all_blocks=1 00:04:53.882 --rc geninfo_unexecuted_blocks=1 00:04:53.882 00:04:53.882 ' 00:04:53.882 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7461d676-225a-4a53-b691-d62ceecf7cb1 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7461d676-225a-4a53-b691-d62ceecf7cb1 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.882 23:49:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.882 23:49:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.882 23:49:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.882 23:49:00 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.882 23:49:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.882 23:49:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.882 23:49:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.883 23:49:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.883 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.883 23:49:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.883 23:49:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.883 23:49:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.883 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.883 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.883 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.883 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:54.141 INFO: launching applications... 
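The launch traced below drives spdk_tgt entirely from a prebuilt JSON config instead of issuing live RPCs after startup. A minimal sketch of the same invocation, using only the flags visible in the trace that follows (-m 0x1 for the core mask, -s 1024 for the hugepage memory size in MiB, -r for the RPC listen socket, --json for the startup config); the contents of extra_key.json are not shown in this log:

    # Start an SPDK target whose entire configuration comes from a JSON file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!    # recorded so the harness can signal and poll it later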
00:04:54.141 23:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57841 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.141 Waiting for target to run... 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57841 /var/tmp/spdk_tgt.sock 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57841 ']' 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.141 23:49:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.141 23:49:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:54.141 [2024-11-18 23:49:00.651472] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:54.141 [2024-11-18 23:49:00.651710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57841 ] 00:04:54.399 [2024-11-18 23:49:00.962412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.399 [2024-11-18 23:49:01.052585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.967 23:49:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.967 23:49:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:54.967 00:04:54.967 23:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:54.967 INFO: shutting down applications... 
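The shutdown traced next is a bounded poll rather than a blocking wait: one SIGINT is sent, then kill -0 re-checks the PID every half second for at most 30 iterations (the "(( i < 30 ))" guard above each sleep 0.5 in the lines below). Reconstructed as a standalone sketch from the json_config/common.sh trace:

    # Ask the target to exit, then wait up to ~15 s for the PID to vanish.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown done
        sleep 0.5
    done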
00:04:54.967 23:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57841 ]] 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57841 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:04:54.967 23:49:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.534 23:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.534 23:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.534 23:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:04:55.534 23:49:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.101 23:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.101 23:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.101 23:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:04:56.101 23:49:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.669 SPDK target shutdown done 00:04:56.669 Success 00:04:56.669 23:49:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.669 23:49:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:56.669 00:04:56.669 real 0m2.628s 00:04:56.669 user 0m2.385s 00:04:56.669 sys 0m0.398s 00:04:56.669 23:49:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.669 23:49:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.669 ************************************ 00:04:56.669 END TEST json_config_extra_key 00:04:56.669 ************************************ 00:04:56.669 23:49:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.669 23:49:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.669 23:49:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.669 23:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.669 ************************************ 00:04:56.669 START TEST alias_rpc 00:04:56.669 ************************************ 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.669 * Looking for test storage... 
00:04:56.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.669 23:49:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.669 --rc genhtml_branch_coverage=1 00:04:56.669 --rc genhtml_function_coverage=1 00:04:56.669 --rc genhtml_legend=1 00:04:56.669 --rc geninfo_all_blocks=1 00:04:56.669 --rc geninfo_unexecuted_blocks=1 00:04:56.669 00:04:56.669 ' 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.669 --rc genhtml_branch_coverage=1 00:04:56.669 --rc genhtml_function_coverage=1 00:04:56.669 --rc genhtml_legend=1 00:04:56.669 --rc geninfo_all_blocks=1 00:04:56.669 --rc geninfo_unexecuted_blocks=1 00:04:56.669 00:04:56.669 ' 00:04:56.669 23:49:03 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.669 --rc genhtml_branch_coverage=1 00:04:56.669 --rc genhtml_function_coverage=1 00:04:56.669 --rc genhtml_legend=1 00:04:56.669 --rc geninfo_all_blocks=1 00:04:56.669 --rc geninfo_unexecuted_blocks=1 00:04:56.669 00:04:56.669 ' 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.669 --rc genhtml_branch_coverage=1 00:04:56.669 --rc genhtml_function_coverage=1 00:04:56.669 --rc genhtml_legend=1 00:04:56.669 --rc geninfo_all_blocks=1 00:04:56.669 --rc geninfo_unexecuted_blocks=1 00:04:56.669 00:04:56.669 ' 00:04:56.669 23:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.669 23:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57928 00:04:56.669 23:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57928 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57928 ']' 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.669 23:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.669 23:49:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.669 [2024-11-18 23:49:03.323187] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:56.669 [2024-11-18 23:49:03.323433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57928 ] 00:04:56.928 [2024-11-18 23:49:03.468865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.928 [2024-11-18 23:49:03.544196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.497 23:49:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.497 23:49:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.497 23:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:57.756 23:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57928 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57928 ']' 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57928 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57928 00:04:57.756 killing process with pid 57928 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57928' 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 57928 00:04:57.756 23:49:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 57928 00:04:59.143 ************************************ 00:04:59.143 END TEST alias_rpc 00:04:59.143 ************************************ 00:04:59.143 00:04:59.143 real 0m2.371s 00:04:59.143 user 0m2.471s 00:04:59.143 sys 0m0.336s 00:04:59.143 23:49:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.143 23:49:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.143 23:49:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:59.143 23:49:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:59.143 23:49:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.143 23:49:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.143 23:49:05 -- common/autotest_common.sh@10 -- # set +x 00:04:59.143 ************************************ 00:04:59.143 START TEST spdkcli_tcp 00:04:59.143 ************************************ 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:59.143 * Looking for test storage... 
00:04:59.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.143 23:49:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.143 --rc genhtml_branch_coverage=1 00:04:59.143 --rc genhtml_function_coverage=1 00:04:59.143 --rc genhtml_legend=1 00:04:59.143 --rc geninfo_all_blocks=1 00:04:59.143 --rc geninfo_unexecuted_blocks=1 00:04:59.143 00:04:59.143 ' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.143 --rc genhtml_branch_coverage=1 00:04:59.143 --rc genhtml_function_coverage=1 00:04:59.143 --rc genhtml_legend=1 00:04:59.143 --rc geninfo_all_blocks=1 00:04:59.143 --rc geninfo_unexecuted_blocks=1 00:04:59.143 
00:04:59.143 ' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.143 --rc genhtml_branch_coverage=1 00:04:59.143 --rc genhtml_function_coverage=1 00:04:59.143 --rc genhtml_legend=1 00:04:59.143 --rc geninfo_all_blocks=1 00:04:59.143 --rc geninfo_unexecuted_blocks=1 00:04:59.143 00:04:59.143 ' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.143 --rc genhtml_branch_coverage=1 00:04:59.143 --rc genhtml_function_coverage=1 00:04:59.143 --rc genhtml_legend=1 00:04:59.143 --rc geninfo_all_blocks=1 00:04:59.143 --rc geninfo_unexecuted_blocks=1 00:04:59.143 00:04:59.143 ' 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58018 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:59.143 23:49:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58018 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58018 ']' 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.143 23:49:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.143 [2024-11-18 23:49:05.733901] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:59.144 [2024-11-18 23:49:05.734174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58018 ] 00:04:59.405 [2024-11-18 23:49:05.889566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.405 [2024-11-18 23:49:05.968069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.405 [2024-11-18 23:49:05.968091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.977 23:49:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.977 23:49:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:59.977 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58035 00:04:59.977 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.977 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.239 [ 00:05:00.239 "bdev_malloc_delete", 00:05:00.239 "bdev_malloc_create", 00:05:00.239 "bdev_null_resize", 00:05:00.239 "bdev_null_delete", 00:05:00.239 "bdev_null_create", 00:05:00.239 "bdev_nvme_cuse_unregister", 00:05:00.239 "bdev_nvme_cuse_register", 00:05:00.239 "bdev_opal_new_user", 00:05:00.239 "bdev_opal_set_lock_state", 00:05:00.239 "bdev_opal_delete", 00:05:00.239 "bdev_opal_get_info", 00:05:00.239 "bdev_opal_create", 00:05:00.239 "bdev_nvme_opal_revert", 00:05:00.239 "bdev_nvme_opal_init", 00:05:00.239 "bdev_nvme_send_cmd", 00:05:00.239 "bdev_nvme_set_keys", 00:05:00.239 "bdev_nvme_get_path_iostat", 00:05:00.239 "bdev_nvme_get_mdns_discovery_info", 00:05:00.239 "bdev_nvme_stop_mdns_discovery", 00:05:00.239 "bdev_nvme_start_mdns_discovery", 00:05:00.239 "bdev_nvme_set_multipath_policy", 00:05:00.239 "bdev_nvme_set_preferred_path", 00:05:00.239 "bdev_nvme_get_io_paths", 00:05:00.239 "bdev_nvme_remove_error_injection", 00:05:00.239 "bdev_nvme_add_error_injection", 00:05:00.239 "bdev_nvme_get_discovery_info", 00:05:00.239 "bdev_nvme_stop_discovery", 00:05:00.239 "bdev_nvme_start_discovery", 00:05:00.239 "bdev_nvme_get_controller_health_info", 00:05:00.239 "bdev_nvme_disable_controller", 00:05:00.239 "bdev_nvme_enable_controller", 00:05:00.239 "bdev_nvme_reset_controller", 00:05:00.239 "bdev_nvme_get_transport_statistics", 00:05:00.239 "bdev_nvme_apply_firmware", 00:05:00.239 "bdev_nvme_detach_controller", 00:05:00.239 "bdev_nvme_get_controllers", 00:05:00.239 "bdev_nvme_attach_controller", 00:05:00.239 "bdev_nvme_set_hotplug", 00:05:00.239 "bdev_nvme_set_options", 00:05:00.239 "bdev_passthru_delete", 00:05:00.239 "bdev_passthru_create", 00:05:00.239 "bdev_lvol_set_parent_bdev", 00:05:00.239 "bdev_lvol_set_parent", 00:05:00.239 "bdev_lvol_check_shallow_copy", 00:05:00.239 "bdev_lvol_start_shallow_copy", 00:05:00.239 "bdev_lvol_grow_lvstore", 00:05:00.239 "bdev_lvol_get_lvols", 00:05:00.239 "bdev_lvol_get_lvstores", 00:05:00.239 "bdev_lvol_delete", 00:05:00.239 "bdev_lvol_set_read_only", 00:05:00.239 "bdev_lvol_resize", 00:05:00.239 "bdev_lvol_decouple_parent", 00:05:00.239 "bdev_lvol_inflate", 00:05:00.239 "bdev_lvol_rename", 00:05:00.239 "bdev_lvol_clone_bdev", 00:05:00.239 "bdev_lvol_clone", 00:05:00.239 "bdev_lvol_snapshot", 00:05:00.239 "bdev_lvol_create", 00:05:00.239 "bdev_lvol_delete_lvstore", 00:05:00.239 "bdev_lvol_rename_lvstore", 00:05:00.239 
"bdev_lvol_create_lvstore", 00:05:00.239 "bdev_raid_set_options", 00:05:00.239 "bdev_raid_remove_base_bdev", 00:05:00.239 "bdev_raid_add_base_bdev", 00:05:00.239 "bdev_raid_delete", 00:05:00.239 "bdev_raid_create", 00:05:00.239 "bdev_raid_get_bdevs", 00:05:00.239 "bdev_error_inject_error", 00:05:00.239 "bdev_error_delete", 00:05:00.239 "bdev_error_create", 00:05:00.239 "bdev_split_delete", 00:05:00.239 "bdev_split_create", 00:05:00.239 "bdev_delay_delete", 00:05:00.239 "bdev_delay_create", 00:05:00.240 "bdev_delay_update_latency", 00:05:00.240 "bdev_zone_block_delete", 00:05:00.240 "bdev_zone_block_create", 00:05:00.240 "blobfs_create", 00:05:00.240 "blobfs_detect", 00:05:00.240 "blobfs_set_cache_size", 00:05:00.240 "bdev_xnvme_delete", 00:05:00.240 "bdev_xnvme_create", 00:05:00.240 "bdev_aio_delete", 00:05:00.240 "bdev_aio_rescan", 00:05:00.240 "bdev_aio_create", 00:05:00.240 "bdev_ftl_set_property", 00:05:00.240 "bdev_ftl_get_properties", 00:05:00.240 "bdev_ftl_get_stats", 00:05:00.240 "bdev_ftl_unmap", 00:05:00.240 "bdev_ftl_unload", 00:05:00.240 "bdev_ftl_delete", 00:05:00.240 "bdev_ftl_load", 00:05:00.240 "bdev_ftl_create", 00:05:00.240 "bdev_virtio_attach_controller", 00:05:00.240 "bdev_virtio_scsi_get_devices", 00:05:00.240 "bdev_virtio_detach_controller", 00:05:00.240 "bdev_virtio_blk_set_hotplug", 00:05:00.240 "bdev_iscsi_delete", 00:05:00.240 "bdev_iscsi_create", 00:05:00.240 "bdev_iscsi_set_options", 00:05:00.240 "accel_error_inject_error", 00:05:00.240 "ioat_scan_accel_module", 00:05:00.240 "dsa_scan_accel_module", 00:05:00.240 "iaa_scan_accel_module", 00:05:00.240 "keyring_file_remove_key", 00:05:00.240 "keyring_file_add_key", 00:05:00.240 "keyring_linux_set_options", 00:05:00.240 "fsdev_aio_delete", 00:05:00.240 "fsdev_aio_create", 00:05:00.240 "iscsi_get_histogram", 00:05:00.240 "iscsi_enable_histogram", 00:05:00.240 "iscsi_set_options", 00:05:00.240 "iscsi_get_auth_groups", 00:05:00.240 "iscsi_auth_group_remove_secret", 00:05:00.240 "iscsi_auth_group_add_secret", 00:05:00.240 "iscsi_delete_auth_group", 00:05:00.240 "iscsi_create_auth_group", 00:05:00.240 "iscsi_set_discovery_auth", 00:05:00.240 "iscsi_get_options", 00:05:00.240 "iscsi_target_node_request_logout", 00:05:00.240 "iscsi_target_node_set_redirect", 00:05:00.240 "iscsi_target_node_set_auth", 00:05:00.240 "iscsi_target_node_add_lun", 00:05:00.240 "iscsi_get_stats", 00:05:00.240 "iscsi_get_connections", 00:05:00.240 "iscsi_portal_group_set_auth", 00:05:00.240 "iscsi_start_portal_group", 00:05:00.240 "iscsi_delete_portal_group", 00:05:00.240 "iscsi_create_portal_group", 00:05:00.240 "iscsi_get_portal_groups", 00:05:00.240 "iscsi_delete_target_node", 00:05:00.240 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.240 "iscsi_target_node_add_pg_ig_maps", 00:05:00.240 "iscsi_create_target_node", 00:05:00.240 "iscsi_get_target_nodes", 00:05:00.240 "iscsi_delete_initiator_group", 00:05:00.240 "iscsi_initiator_group_remove_initiators", 00:05:00.240 "iscsi_initiator_group_add_initiators", 00:05:00.240 "iscsi_create_initiator_group", 00:05:00.240 "iscsi_get_initiator_groups", 00:05:00.240 "nvmf_set_crdt", 00:05:00.240 "nvmf_set_config", 00:05:00.240 "nvmf_set_max_subsystems", 00:05:00.240 "nvmf_stop_mdns_prr", 00:05:00.240 "nvmf_publish_mdns_prr", 00:05:00.240 "nvmf_subsystem_get_listeners", 00:05:00.240 "nvmf_subsystem_get_qpairs", 00:05:00.240 "nvmf_subsystem_get_controllers", 00:05:00.240 "nvmf_get_stats", 00:05:00.240 "nvmf_get_transports", 00:05:00.240 "nvmf_create_transport", 00:05:00.240 "nvmf_get_targets", 00:05:00.240 
"nvmf_delete_target", 00:05:00.240 "nvmf_create_target", 00:05:00.240 "nvmf_subsystem_allow_any_host", 00:05:00.240 "nvmf_subsystem_set_keys", 00:05:00.240 "nvmf_subsystem_remove_host", 00:05:00.240 "nvmf_subsystem_add_host", 00:05:00.240 "nvmf_ns_remove_host", 00:05:00.240 "nvmf_ns_add_host", 00:05:00.240 "nvmf_subsystem_remove_ns", 00:05:00.240 "nvmf_subsystem_set_ns_ana_group", 00:05:00.240 "nvmf_subsystem_add_ns", 00:05:00.240 "nvmf_subsystem_listener_set_ana_state", 00:05:00.240 "nvmf_discovery_get_referrals", 00:05:00.240 "nvmf_discovery_remove_referral", 00:05:00.240 "nvmf_discovery_add_referral", 00:05:00.240 "nvmf_subsystem_remove_listener", 00:05:00.240 "nvmf_subsystem_add_listener", 00:05:00.240 "nvmf_delete_subsystem", 00:05:00.240 "nvmf_create_subsystem", 00:05:00.240 "nvmf_get_subsystems", 00:05:00.240 "env_dpdk_get_mem_stats", 00:05:00.240 "nbd_get_disks", 00:05:00.240 "nbd_stop_disk", 00:05:00.240 "nbd_start_disk", 00:05:00.240 "ublk_recover_disk", 00:05:00.240 "ublk_get_disks", 00:05:00.240 "ublk_stop_disk", 00:05:00.240 "ublk_start_disk", 00:05:00.240 "ublk_destroy_target", 00:05:00.240 "ublk_create_target", 00:05:00.240 "virtio_blk_create_transport", 00:05:00.240 "virtio_blk_get_transports", 00:05:00.240 "vhost_controller_set_coalescing", 00:05:00.240 "vhost_get_controllers", 00:05:00.240 "vhost_delete_controller", 00:05:00.240 "vhost_create_blk_controller", 00:05:00.240 "vhost_scsi_controller_remove_target", 00:05:00.240 "vhost_scsi_controller_add_target", 00:05:00.240 "vhost_start_scsi_controller", 00:05:00.240 "vhost_create_scsi_controller", 00:05:00.240 "thread_set_cpumask", 00:05:00.240 "scheduler_set_options", 00:05:00.240 "framework_get_governor", 00:05:00.240 "framework_get_scheduler", 00:05:00.240 "framework_set_scheduler", 00:05:00.240 "framework_get_reactors", 00:05:00.240 "thread_get_io_channels", 00:05:00.240 "thread_get_pollers", 00:05:00.240 "thread_get_stats", 00:05:00.240 "framework_monitor_context_switch", 00:05:00.240 "spdk_kill_instance", 00:05:00.240 "log_enable_timestamps", 00:05:00.240 "log_get_flags", 00:05:00.240 "log_clear_flag", 00:05:00.240 "log_set_flag", 00:05:00.240 "log_get_level", 00:05:00.240 "log_set_level", 00:05:00.240 "log_get_print_level", 00:05:00.240 "log_set_print_level", 00:05:00.240 "framework_enable_cpumask_locks", 00:05:00.240 "framework_disable_cpumask_locks", 00:05:00.240 "framework_wait_init", 00:05:00.240 "framework_start_init", 00:05:00.240 "scsi_get_devices", 00:05:00.240 "bdev_get_histogram", 00:05:00.240 "bdev_enable_histogram", 00:05:00.240 "bdev_set_qos_limit", 00:05:00.240 "bdev_set_qd_sampling_period", 00:05:00.240 "bdev_get_bdevs", 00:05:00.240 "bdev_reset_iostat", 00:05:00.240 "bdev_get_iostat", 00:05:00.240 "bdev_examine", 00:05:00.240 "bdev_wait_for_examine", 00:05:00.240 "bdev_set_options", 00:05:00.240 "accel_get_stats", 00:05:00.240 "accel_set_options", 00:05:00.240 "accel_set_driver", 00:05:00.240 "accel_crypto_key_destroy", 00:05:00.240 "accel_crypto_keys_get", 00:05:00.240 "accel_crypto_key_create", 00:05:00.240 "accel_assign_opc", 00:05:00.240 "accel_get_module_info", 00:05:00.240 "accel_get_opc_assignments", 00:05:00.240 "vmd_rescan", 00:05:00.240 "vmd_remove_device", 00:05:00.240 "vmd_enable", 00:05:00.240 "sock_get_default_impl", 00:05:00.240 "sock_set_default_impl", 00:05:00.240 "sock_impl_set_options", 00:05:00.240 "sock_impl_get_options", 00:05:00.240 "iobuf_get_stats", 00:05:00.240 "iobuf_set_options", 00:05:00.240 "keyring_get_keys", 00:05:00.240 "framework_get_pci_devices", 00:05:00.240 
"framework_get_config", 00:05:00.240 "framework_get_subsystems", 00:05:00.240 "fsdev_set_opts", 00:05:00.240 "fsdev_get_opts", 00:05:00.240 "trace_get_info", 00:05:00.240 "trace_get_tpoint_group_mask", 00:05:00.240 "trace_disable_tpoint_group", 00:05:00.240 "trace_enable_tpoint_group", 00:05:00.240 "trace_clear_tpoint_mask", 00:05:00.240 "trace_set_tpoint_mask", 00:05:00.240 "notify_get_notifications", 00:05:00.240 "notify_get_types", 00:05:00.240 "spdk_get_version", 00:05:00.240 "rpc_get_methods" 00:05:00.240 ] 00:05:00.240 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.240 23:49:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58018 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58018 ']' 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58018 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58018 00:05:00.240 killing process with pid 58018 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58018' 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58018 00:05:00.240 23:49:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58018 00:05:01.626 ************************************ 00:05:01.626 END TEST spdkcli_tcp 00:05:01.626 ************************************ 00:05:01.626 00:05:01.626 real 0m2.460s 00:05:01.626 user 0m4.424s 00:05:01.626 sys 0m0.406s 00:05:01.626 23:49:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.626 23:49:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.626 23:49:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.626 23:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.626 23:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.626 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:05:01.626 ************************************ 00:05:01.626 START TEST dpdk_mem_utility 00:05:01.626 ************************************ 00:05:01.626 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.626 * Looking for test storage... 
00:05:01.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.626 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.626 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.626 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.626 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.626 23:49:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:01.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
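The "Waiting for process..." message above comes from the harness's waitforlisten step: spdk_tgt is started in the background and the test blocks until the UNIX-domain RPC socket answers, giving up after the max_retries=100 budget set in the trace that follows. A simplified stand-in for that helper, assuming an RPC probe via SPDK's stock rpc.py client rather than the harness's internal listen check:

    # Poll until the target serves RPCs on its socket; bail after 100 tries.
    max_retries=100
    for (( i = 0; i < max_retries; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done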
00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.627 23:49:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.627 --rc genhtml_branch_coverage=1 00:05:01.627 --rc genhtml_function_coverage=1 00:05:01.627 --rc genhtml_legend=1 00:05:01.627 --rc geninfo_all_blocks=1 00:05:01.627 --rc geninfo_unexecuted_blocks=1 00:05:01.627 00:05:01.627 ' 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.627 --rc genhtml_branch_coverage=1 00:05:01.627 --rc genhtml_function_coverage=1 00:05:01.627 --rc genhtml_legend=1 00:05:01.627 --rc geninfo_all_blocks=1 00:05:01.627 --rc geninfo_unexecuted_blocks=1 00:05:01.627 00:05:01.627 ' 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.627 --rc genhtml_branch_coverage=1 00:05:01.627 --rc genhtml_function_coverage=1 00:05:01.627 --rc genhtml_legend=1 00:05:01.627 --rc geninfo_all_blocks=1 00:05:01.627 --rc geninfo_unexecuted_blocks=1 00:05:01.627 00:05:01.627 ' 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.627 --rc genhtml_branch_coverage=1 00:05:01.627 --rc genhtml_function_coverage=1 00:05:01.627 --rc genhtml_legend=1 00:05:01.627 --rc geninfo_all_blocks=1 00:05:01.627 --rc geninfo_unexecuted_blocks=1 00:05:01.627 00:05:01.627 ' 00:05:01.627 23:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.627 23:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58124 00:05:01.627 23:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58124 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 00:05:01.627 23:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.627 23:49:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.627 [2024-11-18 23:49:08.215308] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:01.627 [2024-11-18 23:49:08.215420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58124 ] 00:05:01.888 [2024-11-18 23:49:08.374177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.888 [2024-11-18 23:49:08.470619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.459 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.459 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:02.459 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.459 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.459 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.459 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.459 { 00:05:02.459 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.459 } 00:05:02.459 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.459 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.459 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:02.459 1 heaps totaling size 816.000000 MiB 00:05:02.459 size: 816.000000 MiB heap id: 0 00:05:02.459 end heaps---------- 00:05:02.459 9 mempools totaling size 595.772034 MiB 00:05:02.459 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.459 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.459 size: 92.545471 MiB name: bdev_io_58124 00:05:02.459 size: 50.003479 MiB name: msgpool_58124 00:05:02.459 size: 36.509338 MiB name: fsdev_io_58124 00:05:02.459 size: 21.763794 MiB name: PDU_Pool 00:05:02.459 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.459 size: 4.133484 MiB name: evtpool_58124 00:05:02.459 size: 0.026123 MiB name: Session_Pool 00:05:02.459 end mempools------- 00:05:02.459 6 memzones totaling size 4.142822 MiB 00:05:02.459 size: 1.000366 MiB name: RG_ring_0_58124 00:05:02.459 size: 1.000366 MiB name: RG_ring_1_58124 00:05:02.459 size: 1.000366 MiB name: RG_ring_4_58124 00:05:02.459 size: 1.000366 MiB name: RG_ring_5_58124 00:05:02.459 size: 0.125366 MiB name: RG_ring_2_58124 00:05:02.459 size: 0.015991 MiB name: RG_ring_3_58124 00:05:02.459 end memzones------- 00:05:02.459 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.728 heap id: 0 total size: 816.000000 MiB number of busy elements: 317 number of free elements: 18 00:05:02.728 list of free elements. 
size: 16.790894 MiB
00:05:02.728 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:02.728 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:02.728 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:02.728 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:02.728 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:02.728 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:02.728 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:02.728 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:02.728 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:02.728 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:02.728 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:02.728 element at address: 0x20001ac00000 with size: 0.558777 MiB
00:05:02.728 element at address: 0x200000c00000 with size: 0.491638 MiB
00:05:02.728 element at address: 0x200018e00000 with size: 0.488464 MiB
00:05:02.728 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:02.728 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:02.728 element at address: 0x200028000000 with size: 0.391174 MiB
00:05:02.728 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:02.728 list of standard malloc elements. size: 199.288208 MiB
00:05:02.728 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:02.728 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:02.728 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:02.728 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:02.728 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:02.728 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:02.728 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:02.728 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:02.728 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:02.728 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:02.728 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:02.728 [~300 further elements in the address ranges 0x2000002d7b00 through 0x20002806fe80, each with size: 0.000244 MiB, elided here for readability]
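The element list above is uniform enough to aggregate mechanically; as a rough cross-check of the reported list totals, the per-element sizes can be summed with a one-liner (assuming the log has been saved to a file, here hypothetically build.log):

    # Sum every "with size: <N> MiB" occurrence in the dump
    grep -o 'with size: [0-9.]* MiB' build.log |
        awk '{ total += $3 } END { printf "total: %f MiB\n", total }'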
00:05:02.732 list of memzone associated elements. size: 599.920898 MiB
00:05:02.732 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:02.732 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:02.732 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:02.732 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:02.732 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:02.732 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58124_0
00:05:02.732 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:02.732 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58124_0
00:05:02.732 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:02.732 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58124_0
00:05:02.732 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:02.732 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:02.732 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:02.732 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:02.732 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:02.732 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58124_0
00:05:02.732 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:02.732 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58124
00:05:02.732 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:02.732 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58124
00:05:02.732 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:02.732 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:02.732 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:02.732 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:02.732 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:02.732 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:02.732 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:02.732 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:02.732 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:02.732 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58124
00:05:02.732 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:02.732 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58124
00:05:02.732 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:02.732 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58124
00:05:02.732 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:02.732 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58124
00:05:02.732 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:02.732 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58124
00:05:02.732 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:02.732 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58124
00:05:02.732 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:02.732 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:02.732 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:02.732 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:02.732 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:02.732 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:02.732 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:02.732 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58124
00:05:02.732 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:02.732 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58124
00:05:02.732 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:02.732 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:02.732 element at address: 0x200028064440 with size: 0.023804 MiB
00:05:02.732 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:02.732 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:02.732 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58124
00:05:02.732 element at address: 0x20002806a5c0 with size: 0.002502 MiB
00:05:02.732 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:02.732 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:02.732 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58124
00:05:02.732 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:02.732 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58124
00:05:02.732 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:02.732 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58124
00:05:02.732 element at address: 0x20002806b100 with size: 0.000366 MiB
00:05:02.732 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:02.732 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:02.732 23:49:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58124
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58124 ']'
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58124
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58124
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.732 23:49:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58124'
killing process with pid 58124
23:49:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58124
23:49:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58124
00:05:04.115
00:05:04.115 real 0m2.614s
00:05:04.115 user 0m2.635s
00:05:04.115 sys 0m0.382s
23:49:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
23:49:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:04.115 ************************************
00:05:04.115 END TEST dpdk_mem_utility
00:05:04.115 ************************************
00:05:04.115 23:49:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:04.115 23:49:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.115 23:49:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.115 23:49:10 -- common/autotest_common.sh@10 -- # set +x
00:05:04.115 ************************************
00:05:04.115 START TEST event
00:05:04.115 ************************************
00:05:04.115 23:49:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:04.115 * Looking for test storage...
00:05:04.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:04.115 23:49:10 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:04.115 23:49:10 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:04.115 23:49:10 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:04.115 23:49:10 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:04.115 23:49:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.115 23:49:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:04.115 23:49:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:04.115 23:49:10 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:04.115 23:49:10 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:04.115 23:49:10 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:04.115 23:49:10 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:04.115 23:49:10 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:04.115 23:49:10 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:04.115 23:49:10 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:04.115 23:49:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:04.115 23:49:10 event -- scripts/common.sh@344 -- # case "$op" in
00:05:04.115 23:49:10 event -- scripts/common.sh@345 -- # : 1
00:05:04.115 23:49:10 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:04.115 23:49:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:04.115 23:49:10 event -- scripts/common.sh@365 -- # decimal 1
00:05:04.115 23:49:10 event -- scripts/common.sh@353 -- # local d=1
00:05:04.115 23:49:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:04.115 23:49:10 event -- scripts/common.sh@355 -- # echo 1
00:05:04.115 23:49:10 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:04.467 23:49:10 event -- scripts/common.sh@366 -- # decimal 2
00:05:04.467 23:49:10 event -- scripts/common.sh@353 -- # local d=2
00:05:04.467 23:49:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:04.467 23:49:10 event -- scripts/common.sh@355 -- # echo 2
00:05:04.467 23:49:10 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:04.467 23:49:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:04.467 23:49:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:04.467 23:49:10 event -- scripts/common.sh@368 -- # return 0
00:05:04.467 23:49:10 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.468 [the @1706/@1707 exports of LCOV_OPTS and LCOV are elided here; they only repeat the coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml_*/geninfo_* options) across several stamped lines]
00:05:04.468 23:49:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:04.468 23:49:10 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:04.468 23:49:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:04.468 23:49:10 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:04.468 23:49:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.468 23:49:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.468 ************************************
00:05:04.468 START TEST event_perf
00:05:04.468 ************************************
00:05:04.468 23:49:10 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:04.468 Running I/O for 1 seconds...[2024-11-18 23:49:10.849807] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:04.468 [2024-11-18 23:49:10.849974] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58215 ]
00:05:04.468 [2024-11-18 23:49:11.014190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:04.726 [2024-11-18 23:49:11.119647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:04.726 [2024-11-18 23:49:11.119800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:04.726 [2024-11-18 23:49:11.120035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.726 Running I/O for 1 seconds...[2024-11-18 23:49:11.120045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:05.659
00:05:05.659 lcore 0: 158748
00:05:05.659 lcore 1: 158743
00:05:05.659 lcore 2: 158744
00:05:05.659 lcore 3: 158745
00:05:05.659 done.
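event_perf ran with -m 0xF, i.e. a core mask with bits 0 through 3 set, which is why exactly four reactors report per-lcore event counts above. A small sketch of how such a mask decodes:

    # 0xF = binary 1111 -> lcores 0,1,2,3 each get one reactor
    mask=0xF
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "lcore $core enabled"
    done

With -t 1 the binary runs for one second, so the lcore counters above are roughly events-per-second figures for each reactor.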
00:05:05.659 ************************************
00:05:05.659 END TEST event_perf
00:05:05.659 ************************************
00:05:05.659
00:05:05.659 real 0m1.462s
00:05:05.659 user 0m4.265s
00:05:05.659 sys 0m0.076s
00:05:05.659 23:49:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.659 23:49:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:05.659 23:49:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:05.659 23:49:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:05.659 23:49:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.659 23:49:12 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.659 ************************************
00:05:05.659 START TEST event_reactor
00:05:05.659 ************************************
00:05:05.659 23:49:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:05.918 [2024-11-18 23:49:12.356082] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:05.918 [2024-11-18 23:49:12.356203] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58255 ]
00:05:05.918 [2024-11-18 23:49:12.506099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.918 [2024-11-18 23:49:12.600461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.295 test_start
00:05:07.295 oneshot
00:05:07.295 tick 100
00:05:07.295 tick 100
00:05:07.295 tick 250
00:05:07.295 tick 100
00:05:07.295 tick 100
00:05:07.295 tick 100
00:05:07.295 tick 250
00:05:07.295 tick 500
00:05:07.295 tick 100
00:05:07.295 tick 100
00:05:07.295 tick 250
00:05:07.295 tick 100
00:05:07.295 tick 100
00:05:07.295 test_end
00:05:07.295
00:05:07.295 real 0m1.395s
00:05:07.295 user 0m1.221s
00:05:07.295 sys 0m0.067s
00:05:07.295 23:49:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.295 23:49:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:07.295 ************************************
00:05:07.295 END TEST event_reactor
00:05:07.295 ************************************
00:05:07.295 23:49:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:07.295 23:49:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:07.295 23:49:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.295 23:49:13 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.295 ************************************
00:05:07.295 START TEST event_reactor_perf
00:05:07.295 ************************************
00:05:07.295 23:49:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:07.295 [2024-11-18 23:49:13.792481] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:07.295 [2024-11-18 23:49:13.792596] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58291 ]
00:05:07.295 [2024-11-18 23:49:13.951747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.554 [2024-11-18 23:49:14.043402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.488 test_start
00:05:08.488 test_end
00:05:08.488 Performance: 337932 events per second
00:05:08.488
00:05:08.489 real 0m1.401s
00:05:08.489 user 0m1.225s
00:05:08.489 sys 0m0.068s
00:05:08.489 ************************************
00:05:08.489 END TEST event_reactor_perf
00:05:08.489 ************************************
00:05:08.489 23:49:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.489 23:49:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
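Every suite here is launched through run_test, which appears to produce the START/END banners and the real/user/sys timings seen above. A simplified sketch of such a wrapper, under the assumption that the real autotest_common.sh version also manages xtrace state and result bookkeeping:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # the source of the real/user/sys lines in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }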
00:05:08.747 23:49:15 event -- event/event.sh@49 -- # uname -s
00:05:08.747 23:49:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:08.747 23:49:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:08.747 23:49:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.747 23:49:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.747 23:49:15 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.747 ************************************
00:05:08.747 START TEST event_scheduler
00:05:08.747 ************************************
00:05:08.747 23:49:15 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:08.747 * Looking for test storage...
00:05:08.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:08.747 23:49:15 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:08.748 [the same scripts/common.sh cmp_versions xtrace and LCOV_OPTS/LCOV coverage-flag exports as in the event preamble above are repeated here verbatim and elided for readability]
00:05:08.748 23:49:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:08.748 23:49:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58362
00:05:08.748 23:49:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.748 23:49:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58362
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58362 ']'
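The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2.x: both version strings are split on '.', '-' and ':' and compared field by field. A condensed sketch of the same logic, not the verbatim script:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:            # split fields on '.', '-' and ':'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]       # all fields equal: true only for ==, <= or >=
    }
    lt 1.15 2 && echo "lcov is older than 2"   # returns 0, as in the trace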
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.748 23:49:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.748 23:49:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.007 [2024-11-18 23:49:15.420223] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:09.007 [2024-11-18 23:49:15.420647] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58362 ]
00:05:09.007 [2024-11-18 23:49:15.574884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:09.007 [2024-11-18 23:49:15.673174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.007 [2024-11-18 23:49:15.673385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:09.007 [2024-11-18 23:49:15.673563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:09.007 [2024-11-18 23:49:15.673567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:09.576 23:49:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.576 23:49:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:09.576 23:49:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:09.576 23:49:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.576 23:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.834 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.834 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.834 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.834 POWER: Cannot set governor of lcore 0 to performance
00:05:09.834 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.834 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.834 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.834 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.834 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:09.834 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:09.834 POWER: Unable to set Power Management Environment for lcore 0
00:05:09.834 [2024-11-18 23:49:16.267456] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:05:09.834 [2024-11-18 23:49:16.267492] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:05:09.834 [2024-11-18 23:49:16.267514] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:09.834 [2024-11-18 23:49:16.267574] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:09.834 [2024-11-18 23:49:16.267606] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:09.834 [2024-11-18 23:49:16.267627] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.834 23:49:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.834 [2024-11-18 23:49:16.488767] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.834 23:49:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.834 23:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.834 ************************************
00:05:09.834 START TEST scheduler_create_thread
00:05:09.834 ************************************
00:05:09.834 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:09.834 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:09.834 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.834 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.835 2
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.835 3
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.835 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.094 4
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
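All of the scheduler manipulation in this suite happens over the RPC socket: framework_set_scheduler dynamic is issued while the app is still in --wait-for-rpc state, then framework_start_init, then the plugin-provided thread RPCs seen above. A sketch of the equivalent manual calls, assuming the scheduler test app is listening on /var/tmp/spdk.sock and scripts/rpc.py can find the scheduler_plugin module:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # pick the dynamic scheduler before init
    $rpc framework_start_init              # finish SPDK subsystem initialization
    # plugin RPC used by the test: an always-busy thread pinned to core 0
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100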
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.094 5
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:10.094 [the analogous xtrace for threads 6-10 is elided: scheduler.sh@17-@19 create idle_pinned threads on masks 0x2, 0x4 and 0x8 with -a 0, scheduler.sh@21 creates one_third_active -a 30, and scheduler.sh@22 creates half_active -a 0, each followed by the same @563/@10 xtrace pair and thread counter]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.094 23:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.661 ************************************
00:05:10.661 END TEST scheduler_create_thread
00:05:10.661 ************************************
00:05:10.661 23:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.661
00:05:10.661 real 0m0.593s
00:05:10.661 user 0m0.014s
00:05:10.661 sys 0m0.003s
23:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
23:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
23:49:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
23:49:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58362
23:49:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58362 ']'
23:49:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58362
23:49:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
23:49:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
23:49:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58362
killing process with pid 58362
23:49:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
23:49:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
23:49:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58362'
23:49:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58362
23:49:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58362
00:05:10.921 [2024-11-18 23:49:17.573823] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:11.493
00:05:11.493 real 0m2.937s
00:05:11.493 user 0m5.606s
00:05:11.493 sys 0m0.339s
00:05:11.493 ************************************
00:05:11.493 END TEST event_scheduler
00:05:11.493 ************************************
23:49:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
23:49:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:11.754 23:49:18 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:11.754 23:49:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:11.754 23:49:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.754 23:49:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.754 23:49:18 event -- common/autotest_common.sh@10 -- # set +x
00:05:11.754 ************************************
00:05:11.754 START TEST app_repeat
00:05:11.754 ************************************
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:11.754 Process app_repeat pid: 58446
00:05:11.754 spdk_app_start Round 0
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58446
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58446'
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock
00:05:11.754 23:49:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']'
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:11.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.754 23:49:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:11.754 [2024-11-18 23:49:18.236968] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:11.754 [2024-11-18 23:49:18.237051] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58446 ] 00:05:11.754 [2024-11-18 23:49:18.391081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.015 [2024-11-18 23:49:18.489693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.016 [2024-11-18 23:49:18.489802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.583 23:49:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.583 23:49:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.583 23:49:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.841 Malloc0 00:05:12.841 23:49:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.100 Malloc1 00:05:13.100 23:49:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.100 23:49:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.359 /dev/nbd0 00:05:13.359 23:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.359 23:49:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:13.359 23:49:19 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.359 1+0 records in 00:05:13.359 1+0 records out 00:05:13.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565739 s, 7.2 MB/s 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.359 23:49:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.359 23:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.359 23:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.359 23:49:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.617 /dev/nbd1 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.617 1+0 records in 00:05:13.617 1+0 records out 00:05:13.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158156 s, 25.9 MB/s 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.617 23:49:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.617 23:49:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.617 
23:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.877 { 00:05:13.877 "nbd_device": "/dev/nbd0", 00:05:13.877 "bdev_name": "Malloc0" 00:05:13.877 }, 00:05:13.877 { 00:05:13.877 "nbd_device": "/dev/nbd1", 00:05:13.877 "bdev_name": "Malloc1" 00:05:13.877 } 00:05:13.877 ]' 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.877 { 00:05:13.877 "nbd_device": "/dev/nbd0", 00:05:13.877 "bdev_name": "Malloc0" 00:05:13.877 }, 00:05:13.877 { 00:05:13.877 "nbd_device": "/dev/nbd1", 00:05:13.877 "bdev_name": "Malloc1" 00:05:13.877 } 00:05:13.877 ]' 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.877 /dev/nbd1' 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.877 /dev/nbd1' 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.877 23:49:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.878 256+0 records in 00:05:13.878 256+0 records out 00:05:13.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00997769 s, 105 MB/s 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.878 256+0 records in 00:05:13.878 256+0 records out 00:05:13.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337083 s, 31.1 MB/s 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.878 256+0 records in 00:05:13.878 256+0 records out 00:05:13.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197803 s, 53.0 MB/s 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.878 23:49:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.878 23:49:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.139 23:49:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.399 23:49:20 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.399 23:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.656 23:49:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.656 23:49:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.928 23:49:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.504 [2024-11-18 23:49:22.039891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.504 [2024-11-18 23:49:22.115206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.504 [2024-11-18 23:49:22.115380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.764 [2024-11-18 23:49:22.211801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.764 [2024-11-18 23:49:22.211859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.300 23:49:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.300 23:49:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:18.300 spdk_app_start Round 1 00:05:18.300 23:49:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
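The block above is nbd_dd_data_verify in full: seed a 1 MiB temp file from /dev/urandom, dd it onto each exported /dev/nbdX with oflag=direct so the write bypasses the page cache, then cmp each device back against the file byte by byte. Reduced to a sketch with the same sizes and paths as the log; the function name is invented for illustration:

# Write random data through each nbd device and read-verify it (sketch).
nbd_verify_sketch() {
  local tmp=$1; shift                                      # tmp file, then devices
  dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
  for dev in "$@"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write, bypassing cache
  done
  for dev in "$@"; do
    cmp -b -n 1M "$tmp" "$dev" || return 1                 # byte-compare first 1 MiB
  done
  rm "$tmp"
}
# e.g. nbd_verify_sketch /tmp/nbdrandtest /dev/nbd0 /dev/nbd1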
00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.300 23:49:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.300 23:49:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.300 Malloc0 00:05:18.300 23:49:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.561 Malloc1 00:05:18.561 23:49:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.561 23:49:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.822 /dev/nbd0 00:05:18.822 23:49:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.822 23:49:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.822 1+0 records in 00:05:18.822 1+0 records out 
00:05:18.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357918 s, 11.4 MB/s 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.822 23:49:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.822 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.822 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.822 23:49:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.822 /dev/nbd1 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.082 1+0 records in 00:05:19.082 1+0 records out 00:05:19.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149966 s, 27.3 MB/s 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.082 23:49:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.082 { 00:05:19.082 "nbd_device": "/dev/nbd0", 00:05:19.082 "bdev_name": "Malloc0" 00:05:19.082 }, 00:05:19.082 { 00:05:19.082 "nbd_device": "/dev/nbd1", 00:05:19.082 "bdev_name": "Malloc1" 00:05:19.082 } 
00:05:19.082 ]' 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.082 { 00:05:19.082 "nbd_device": "/dev/nbd0", 00:05:19.082 "bdev_name": "Malloc0" 00:05:19.082 }, 00:05:19.082 { 00:05:19.082 "nbd_device": "/dev/nbd1", 00:05:19.082 "bdev_name": "Malloc1" 00:05:19.082 } 00:05:19.082 ]' 00:05:19.082 23:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.342 23:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.343 /dev/nbd1' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.343 /dev/nbd1' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.343 256+0 records in 00:05:19.343 256+0 records out 00:05:19.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698296 s, 150 MB/s 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.343 256+0 records in 00:05:19.343 256+0 records out 00:05:19.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161327 s, 65.0 MB/s 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.343 256+0 records in 00:05:19.343 256+0 records out 00:05:19.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164839 s, 63.6 MB/s 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.343 23:49:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.343 23:49:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.601 23:49:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.860 23:49:26 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.860 23:49:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.860 23:49:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.118 23:49:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.683 [2024-11-18 23:49:27.276100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.683 [2024-11-18 23:49:27.348112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.683 [2024-11-18 23:49:27.348156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.945 [2024-11-18 23:49:27.449209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.945 [2024-11-18 23:49:27.449259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.475 23:49:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.475 spdk_app_start Round 2 00:05:23.475 23:49:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.475 23:49:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
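The count=0 checks above parse the RPC's JSON twice: jq -r '.[] | .nbd_device' extracts the device names and grep -c /dev/nbd counts them, with the lone true covering grep's nonzero exit status when nothing matches. Condensed into one sketch, with the helper name invented and the rpc.py path taken from the log:

# Count nbd devices currently exported over the RPC socket (sketch).
count_nbd_sketch() {
  local sock=$1 names count
  names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks |
          jq -r '.[] | .nbd_device')
  # grep -c still prints 0 when nothing matches, but exits 1; mask the status
  count=$(echo "$names" | grep -c /dev/nbd || true)
  echo "$count"
}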
00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.475 23:49:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.475 23:49:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.733 Malloc0 00:05:23.733 23:49:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.733 Malloc1 00:05:23.733 23:49:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.733 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.991 /dev/nbd0 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.991 1+0 records in 00:05:23.991 1+0 records out 
00:05:23.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019216 s, 21.3 MB/s 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.991 23:49:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.991 23:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.250 /dev/nbd1 00:05:24.250 23:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.250 23:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.250 1+0 records in 00:05:24.250 1+0 records out 00:05:24.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252674 s, 16.2 MB/s 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.250 23:49:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.250 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.250 23:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.251 23:49:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.251 23:49:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.251 23:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.512 { 00:05:24.512 "nbd_device": "/dev/nbd0", 00:05:24.512 "bdev_name": "Malloc0" 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "nbd_device": "/dev/nbd1", 00:05:24.512 "bdev_name": "Malloc1" 00:05:24.512 } 
00:05:24.512 ]' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.512 { 00:05:24.512 "nbd_device": "/dev/nbd0", 00:05:24.512 "bdev_name": "Malloc0" 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "nbd_device": "/dev/nbd1", 00:05:24.512 "bdev_name": "Malloc1" 00:05:24.512 } 00:05:24.512 ]' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.512 /dev/nbd1' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.512 /dev/nbd1' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.512 256+0 records in 00:05:24.512 256+0 records out 00:05:24.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661404 s, 159 MB/s 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.512 256+0 records in 00:05:24.512 256+0 records out 00:05:24.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163551 s, 64.1 MB/s 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.512 256+0 records in 00:05:24.512 256+0 records out 00:05:24.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165611 s, 63.3 MB/s 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.512 23:49:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.512 23:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.088 23:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.346 23:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.347 23:49:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.347 23:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.347 23:49:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.347 23:49:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.347 23:49:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.605 23:49:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.172 [2024-11-18 23:49:32.802797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.430 [2024-11-18 23:49:32.874383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.430 [2024-11-18 23:49:32.874523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.430 [2024-11-18 23:49:32.976558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.430 [2024-11-18 23:49:32.976619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.960 23:49:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:05:28.960 23:49:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:05:28.960 23:49:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.960 23:49:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
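waitfornbd and waitfornbd_exit, both visible above, are the same bounded poll of /proc/partitions with opposite success conditions: one waits for the nbdX name to appear after nbd_start_disk, the other for it to vanish after nbd_stop_disk. A combined sketch; the 20-iteration bound mirrors the log's i <= 20 loops, while the sleep interval is an assumption:

# Wait for an nbd name to appear (mode=up) or disappear (mode=down) (sketch).
waitfornbd_sketch() {
  local name=$1 mode=$2 i
  for ((i = 1; i <= 20; i++)); do
    if grep -q -w "$name" /proc/partitions; then
      [ "$mode" = up ] && return 0
    else
      [ "$mode" = down ] && return 0
    fi
    sleep 0.1
  done
  return 1
}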
00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.961 23:49:35 event.app_repeat -- event/event.sh@39 -- # killprocess 58446 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58446 ']' 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58446 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58446 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.961 killing process with pid 58446 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58446' 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58446 00:05:28.961 23:49:35 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58446 00:05:29.528 spdk_app_start is called in Round 0. 00:05:29.528 Shutdown signal received, stop current app iteration 00:05:29.528 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:29.528 spdk_app_start is called in Round 1. 00:05:29.528 Shutdown signal received, stop current app iteration 00:05:29.528 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:29.528 spdk_app_start is called in Round 2. 00:05:29.528 Shutdown signal received, stop current app iteration 00:05:29.528 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:29.528 spdk_app_start is called in Round 3. 00:05:29.528 Shutdown signal received, stop current app iteration 00:05:29.528 23:49:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.528 23:49:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.528 00:05:29.528 real 0m17.795s 00:05:29.528 user 0m39.093s 00:05:29.528 sys 0m2.054s 00:05:29.528 23:49:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.528 23:49:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.528 ************************************ 00:05:29.528 END TEST app_repeat 00:05:29.528 ************************************ 00:05:29.528 23:49:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.528 23:49:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.528 23:49:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.529 23:49:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.529 23:49:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.529 ************************************ 00:05:29.529 START TEST cpu_locks 00:05:29.529 ************************************ 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.529 * Looking for test storage... 
00:05:29.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.529 23:49:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.529 --rc genhtml_branch_coverage=1 00:05:29.529 --rc genhtml_function_coverage=1 00:05:29.529 --rc genhtml_legend=1 00:05:29.529 --rc geninfo_all_blocks=1 00:05:29.529 --rc geninfo_unexecuted_blocks=1 00:05:29.529 00:05:29.529 ' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.529 --rc genhtml_branch_coverage=1 00:05:29.529 --rc genhtml_function_coverage=1 
00:05:29.529 --rc genhtml_legend=1 00:05:29.529 --rc geninfo_all_blocks=1 00:05:29.529 --rc geninfo_unexecuted_blocks=1 00:05:29.529 00:05:29.529 ' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.529 --rc genhtml_branch_coverage=1 00:05:29.529 --rc genhtml_function_coverage=1 00:05:29.529 --rc genhtml_legend=1 00:05:29.529 --rc geninfo_all_blocks=1 00:05:29.529 --rc geninfo_unexecuted_blocks=1 00:05:29.529 00:05:29.529 ' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.529 --rc genhtml_branch_coverage=1 00:05:29.529 --rc genhtml_function_coverage=1 00:05:29.529 --rc genhtml_legend=1 00:05:29.529 --rc geninfo_all_blocks=1 00:05:29.529 --rc geninfo_unexecuted_blocks=1 00:05:29.529 00:05:29.529 ' 00:05:29.529 23:49:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.529 23:49:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.529 23:49:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.529 23:49:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.529 23:49:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.529 ************************************ 00:05:29.529 START TEST default_locks 00:05:29.529 ************************************ 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58871 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58871 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58871 ']' 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.529 23:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.788 [2024-11-18 23:49:36.249373] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
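The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x before enabling branch and function coverage flags. A minimal sketch of that version test, assuming purely numeric dot-separated fields (the real cmp_versions also splits on '-' and ':' and validates each field with a digit regex, omitted here):

    # ver_lt A B -> exit 0 when version A sorts strictly below version B
    ver_lt() {
        local -a v1 v2
        local i len
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }

    # Same branch the trace takes: lcov 1.15 < 2, so the 1.x option spelling is used.
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi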
00:05:29.788 [2024-11-18 23:49:36.249757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58871 ] 00:05:29.788 [2024-11-18 23:49:36.401793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.046 [2024-11-18 23:49:36.499916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.612 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.612 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:30.612 23:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58871 00:05:30.612 23:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58871 00:05:30.612 23:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58871 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58871 ']' 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58871 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58871 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.870 killing process with pid 58871 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58871' 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58871 00:05:30.870 23:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58871 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58871 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58871 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58871 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58871 ']' 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
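Every locks_exist call in this run reduces to one check: when core-mask locking is on, spdk_tgt holds a file lock per claimed core, and lslocks can report it by pid. A sketch of the helper exactly as traced, with the pid from this run hard-coded purely for illustration:

    # locks_exist PID -> succeeds when PID holds at least one SPDK core lock
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 58871 && echo 'pid 58871 holds its core-mask lock'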
00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.244 ERROR: process (pid: 58871) is no longer running 00:05:32.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58871) - No such process 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.244 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.245 00:05:32.245 real 0m2.377s 00:05:32.245 user 0m2.384s 00:05:32.245 sys 0m0.443s 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.245 23:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.245 ************************************ 00:05:32.245 END TEST default_locks 00:05:32.245 ************************************ 00:05:32.245 23:49:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.245 23:49:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.245 23:49:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.245 23:49:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.245 ************************************ 00:05:32.245 START TEST default_locks_via_rpc 00:05:32.245 ************************************ 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58935 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58935 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58935 ']' 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
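The NOT waitforlisten block above is the suite's expected-failure harness: run a command, keep its exit status, and pass only if it failed. A simplified reconstruction of that pattern (autotest_common.sh additionally distinguishes shell functions from binaries and normalizes signal exits, both omitted in this sketch):

    # NOT CMD... -> exit 0 iff CMD exits non-zero
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Usage mirroring the trace: after killprocess, the pid must be gone,
    # so probing it is expected to fail (and prints "No such process").
    NOT kill -0 58871 && echo 'pid 58871 is gone, as default_locks expects'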
00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.245 23:49:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.245 [2024-11-18 23:49:38.682338] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:32.245 [2024-11-18 23:49:38.682459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:05:32.245 [2024-11-18 23:49:38.837577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.245 [2024-11-18 23:49:38.916911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.810 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58935 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58935 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58935 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58935 ']' 00:05:33.067 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58935 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.068 23:49:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58935 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.068 killing process with pid 58935 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58935' 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58935 00:05:33.068 23:49:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58935 00:05:34.442 00:05:34.442 real 0m2.273s 00:05:34.442 user 0m2.285s 00:05:34.442 sys 0m0.418s 00:05:34.442 23:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.442 ************************************ 00:05:34.442 END TEST default_locks_via_rpc 00:05:34.442 ************************************ 00:05:34.442 23:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.442 23:49:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.442 23:49:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.442 23:49:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.442 23:49:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.442 ************************************ 00:05:34.442 START TEST non_locking_app_on_locked_coremask 00:05:34.442 ************************************ 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58987 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58987 /var/tmp/spdk.sock 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58987 ']' 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.442 23:49:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.442 [2024-11-18 23:49:40.995331] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
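default_locks_via_rpc, which just finished above, drives the same lock state over JSON-RPC instead of the command line: framework_disable_cpumask_locks releases the per-core file locks and framework_enable_cpumask_locks reclaims them. A hedged sketch with SPDK's stock rpc.py client, assuming it is run from the repo root and that a short sleep stands in for the suite's waitforlisten:

    # Start a single-core target (locks on by default), then toggle at runtime.
    build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 2                                           # crude waitforlisten stand-in

    scripts/rpc.py framework_disable_cpumask_locks    # releases the core-0 lock
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock     # prints 0

    scripts/rpc.py framework_enable_cpumask_locks     # reclaims it
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock     # prints 1

    kill "$tgt_pid"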
00:05:34.443 [2024-11-18 23:49:40.995456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:05:34.743 [2024-11-18 23:49:41.150651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.743 [2024-11-18 23:49:41.230250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59003 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59003 /var/tmp/spdk2.sock 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59003 ']' 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.311 23:49:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.311 [2024-11-18 23:49:41.855479] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:35.311 [2024-11-18 23:49:41.855606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59003 ] 00:05:35.571 [2024-11-18 23:49:42.026216] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
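non_locking_app_on_locked_coremask, traced above, shows the one supported way to share a claimed core: the second instance opts out of locking and talks on its own RPC socket. A sketch under the same repo-root and sleep assumptions as before:

    # First target claims core 0; the second coexists because it takes no lock.
    build/bin/spdk_tgt -m 0x1 &
    sleep 2
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    sleep 2
    lslocks | grep -c spdk_cpu_lock   # still 1: only the first instance holds a lock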
00:05:35.571 [2024-11-18 23:49:42.026263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.571 [2024-11-18 23:49:42.188500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.508 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.508 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.508 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58987 00:05:36.508 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58987 00:05:36.508 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58987 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58987 ']' 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58987 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.767 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58987 00:05:37.027 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.027 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.027 killing process with pid 58987 00:05:37.027 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58987' 00:05:37.027 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58987 00:05:37.027 23:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58987 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59003 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59003 ']' 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59003 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59003 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.570 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.571 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59003' 00:05:39.571 killing process with pid 59003 00:05:39.571 23:49:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59003 00:05:39.571 23:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59003 00:05:40.511 00:05:40.511 real 0m6.092s 00:05:40.511 user 0m6.329s 00:05:40.511 sys 0m0.812s 00:05:40.511 23:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.511 ************************************ 00:05:40.511 END TEST non_locking_app_on_locked_coremask 00:05:40.511 ************************************ 00:05:40.511 23:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 23:49:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.511 23:49:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.511 23:49:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.511 23:49:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 ************************************ 00:05:40.511 START TEST locking_app_on_unlocked_coremask 00:05:40.511 ************************************ 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59094 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59094 /var/tmp/spdk.sock 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59094 ']' 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:40.511 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.511 [2024-11-18 23:49:47.127577] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:40.511 [2024-11-18 23:49:47.127683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59094 ] 00:05:40.771 [2024-11-18 23:49:47.277859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.771 [2024-11-18 23:49:47.277897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.771 [2024-11-18 23:49:47.358106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59110 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59110 /var/tmp/spdk2.sock 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59110 ']' 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.344 23:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 [2024-11-18 23:49:48.041276] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:41.606 [2024-11-18 23:49:48.041392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59110 ] 00:05:41.606 [2024-11-18 23:49:48.204692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.867 [2024-11-18 23:49:48.365027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.811 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.811 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.811 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59110 00:05:42.811 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59110 00:05:42.811 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59094 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59094 ']' 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59094 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59094 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.068 killing process with pid 59094 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59094' 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59094 00:05:43.068 23:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59094 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59110 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59110 ']' 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59110 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59110 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.595 killing process with pid 59110 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59110' 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59110 00:05:45.595 23:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59110 00:05:46.530 00:05:46.530 real 0m6.025s 00:05:46.530 user 0m6.311s 00:05:46.530 sys 0m0.767s 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.530 ************************************ 00:05:46.530 END TEST locking_app_on_unlocked_coremask 00:05:46.530 ************************************ 00:05:46.530 23:49:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.530 23:49:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.530 23:49:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.530 23:49:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.530 ************************************ 00:05:46.530 START TEST locking_app_on_locked_coremask 00:05:46.530 ************************************ 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59201 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59201 /var/tmp/spdk.sock 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59201 ']' 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.530 23:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.530 [2024-11-18 23:49:53.210582] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
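locking_app_on_unlocked_coremask, which ended in the stretch above, inverts the previous case: the first target runs unlocked, so a second, locking target on the same core can still take the core's lock. Sketch, same assumptions:

    # First instance takes no lock; the second claims core 0's lock for itself.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    sleep 2
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    tgt2_pid=$!
    sleep 2
    lslocks -p "$tgt2_pid" | grep -q spdk_cpu_lock && echo 'lock held by target 2'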
00:05:46.530 [2024-11-18 23:49:53.210670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:05:46.788 [2024-11-18 23:49:53.361631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.788 [2024-11-18 23:49:53.441890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59219 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59219 /var/tmp/spdk2.sock 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59219 /var/tmp/spdk2.sock 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59219 /var/tmp/spdk2.sock 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.726 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.727 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.727 [2024-11-18 23:49:54.130299] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:47.727 [2024-11-18 23:49:54.130419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 00:05:47.727 [2024-11-18 23:49:54.294726] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59201 has claimed it. 00:05:47.727 [2024-11-18 23:49:54.294778] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.299 ERROR: process (pid: 59219) is no longer running 00:05:48.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59219) - No such process 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59201 ']' 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.299 killing process with pid 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59201' 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59201 00:05:48.299 23:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59201 00:05:49.681 00:05:49.681 real 0m2.988s 00:05:49.681 user 0m3.209s 00:05:49.681 sys 0m0.522s 00:05:49.681 23:49:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.681 ************************************ 00:05:49.681 END 
TEST locking_app_on_locked_coremask 00:05:49.681 ************************************ 00:05:49.681 23:49:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.681 23:49:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.681 23:49:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.681 23:49:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.681 23:49:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.681 ************************************ 00:05:49.681 START TEST locking_overlapped_coremask 00:05:49.681 ************************************ 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59272 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59272 /var/tmp/spdk.sock 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59272 ']' 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.681 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.682 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.682 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.682 23:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.682 [2024-11-18 23:49:56.257601] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
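locking_app_on_locked_coremask, whose tail is traced just above, exercises the failure path: with locking enabled on both sides, the second target aborts at startup, claim_cpu_cores reporting that the core is already held. A sketch of that expected non-zero exit:

    build/bin/spdk_tgt -m 0x1 &                          # claims core 0
    sleep 2
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # exits: core 0 already locked
    echo "second target exit status: $?"                 # non-zero, as NOT expects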
00:05:49.682 [2024-11-18 23:49:56.257732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:05:49.940 [2024-11-18 23:49:56.415409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.940 [2024-11-18 23:49:56.498535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.940 [2024-11-18 23:49:56.498802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.940 [2024-11-18 23:49:56.498831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59290 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59290 /var/tmp/spdk2.sock 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59290 /var/tmp/spdk2.sock 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59290 /var/tmp/spdk2.sock 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59290 ']' 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.506 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.506 [2024-11-18 23:49:57.144822] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:50.506 [2024-11-18 23:49:57.144936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59290 ] 00:05:50.765 [2024-11-18 23:49:57.317197] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59272 has claimed it. 00:05:50.765 [2024-11-18 23:49:57.317247] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.330 ERROR: process (pid: 59290) is no longer running 00:05:51.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59290) - No such process 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59272 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59272 ']' 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59272 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59272 00:05:51.330 killing process with pid 59272 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59272' 00:05:51.330 23:49:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59272 00:05:51.330 23:49:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59272 00:05:52.706 ************************************ 00:05:52.706 END TEST locking_overlapped_coremask 00:05:52.706 ************************************ 00:05:52.706 00:05:52.706 real 0m2.787s 00:05:52.706 user 0m7.582s 00:05:52.706 sys 0m0.416s 00:05:52.706 23:49:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.706 23:49:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.706 23:49:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.706 23:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.706 23:49:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.706 23:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.706 ************************************ 00:05:52.706 START TEST locking_overlapped_coremask_via_rpc 00:05:52.706 ************************************ 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59338 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59338 /var/tmp/spdk.sock 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.706 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.706 [2024-11-18 23:49:59.070695] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:52.706 [2024-11-18 23:49:59.071251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:05:52.706 [2024-11-18 23:49:59.211944] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
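The overlapped-coremask run that just ended makes the lock granularity visible: each claimed core is a file /var/tmp/spdk_cpu_lock_NNN, a 0x7 mask locks cores 0 through 2, and a 0x1c mask (cores 2 through 4) collides on the shared core 2. check_remaining_locks from the trace, reconstructed:

    # After the 0x1c target fails, exactly cores 0-2 must still be locked.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'cores 0-2 locked, no strays'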
00:05:52.706 [2024-11-18 23:49:59.211980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.706 [2024-11-18 23:49:59.292900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.706 [2024-11-18 23:49:59.293208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.706 [2024-11-18 23:49:59.293223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59356 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59356 /var/tmp/spdk2.sock 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59356 ']' 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.274 23:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.274 [2024-11-18 23:49:59.940969] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:53.274 [2024-11-18 23:49:59.941296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:05:53.533 [2024-11-18 23:50:00.115295] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.533 [2024-11-18 23:50:00.115339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.791 [2024-11-18 23:50:00.322564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.791 [2024-11-18 23:50:00.326290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.791 [2024-11-18 23:50:00.326312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.168 [2024-11-18 23:50:01.577276] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59338 has claimed it. 00:05:55.168 request: 00:05:55.168 { 00:05:55.168 "method": "framework_enable_cpumask_locks", 00:05:55.168 "req_id": 1 00:05:55.168 } 00:05:55.168 Got JSON-RPC error response 00:05:55.168 response: 00:05:55.168 { 00:05:55.168 "code": -32603, 00:05:55.168 "message": "Failed to claim CPU core: 2" 00:05:55.168 } 00:05:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
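A note on the failure above: it is the expected result, not a bug. Both targets were started with --disable-cpumask-locks; the first, with mask 0x7, holds cores 0-2, and the second, with mask 0x1c, overlaps it on core 2. Once the first target takes the per-core file locks via RPC, the second target's attempt must fail on the shared core. A condensed sketch of the sequence, using only commands that appear in this log:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, locks deactivated at startup
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, locks deactivated at startup
    scripts/rpc.py framework_enable_cpumask_locks                  # first target claims /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails with -32603: core 2 already claimed

The spdk_cpu_lock_NNN lock files are then verified explicitly by check_remaining_locks further down.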
00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59338 /var/tmp/spdk.sock 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59356 /var/tmp/spdk2.sock 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59356 ']' 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
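The waitforlisten helper behind the repeated "Waiting for process..." messages polls the RPC socket until the target answers, bailing out if the process dies first. The real implementation lives in test/common/autotest_common.sh; the sketch below only illustrates the shape, and the rpc_get_methods probe is an assumption:

    waitforlisten() {   # illustrative sketch, not the exact autotest_common.sh body
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        until scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; do
            kill -0 "$pid" 2>/dev/null || return 1      # target exited before it ever listened
            (( ++i > max_retries )) && return 1         # give up after max_retries polls
            sleep 0.1
        done
    }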
00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.168 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.427 00:05:55.427 real 0m2.954s 00:05:55.427 user 0m1.049s 00:05:55.427 sys 0m0.127s 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.427 23:50:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 ************************************ 00:05:55.427 END TEST locking_overlapped_coremask_via_rpc 00:05:55.427 ************************************ 00:05:55.427 23:50:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:55.427 23:50:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59338 ]] 00:05:55.427 23:50:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59338 00:05:55.427 23:50:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59338 ']' 00:05:55.427 23:50:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59338 00:05:55.427 23:50:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:55.427 23:50:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.427 23:50:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59338 00:05:55.427 killing process with pid 59338 00:05:55.427 23:50:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.427 23:50:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.427 23:50:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59338' 00:05:55.427 23:50:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59338 00:05:55.427 23:50:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59338 00:05:56.907 23:50:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59356 ]] 00:05:56.907 23:50:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59356 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59356 ']' 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59356 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.907 
23:50:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59356 00:05:56.907 killing process with pid 59356 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59356' 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59356 00:05:56.907 23:50:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59356 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.284 Process with pid 59338 is not found 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59338 ]] 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59338 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59338 ']' 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59338 00:05:58.284 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59338) - No such process 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59338 is not found' 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59356 ]] 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59356 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59356 ']' 00:05:58.284 Process with pid 59356 is not found 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59356 00:05:58.284 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59356) - No such process 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59356 is not found' 00:05:58.284 23:50:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.284 00:05:58.284 real 0m28.655s 00:05:58.284 user 0m50.970s 00:05:58.284 sys 0m4.278s 00:05:58.284 ************************************ 00:05:58.284 END TEST cpu_locks 00:05:58.284 ************************************ 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.284 23:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 ************************************ 00:05:58.284 END TEST event 00:05:58.284 ************************************ 00:05:58.284 00:05:58.284 real 0m54.054s 00:05:58.284 user 1m42.542s 00:05:58.284 sys 0m7.118s 00:05:58.284 23:50:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.284 23:50:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 23:50:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.284 23:50:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.284 23:50:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.284 23:50:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 ************************************ 00:05:58.284 START TEST thread 00:05:58.284 ************************************ 00:05:58.284 23:50:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.284 * Looking for test storage... 
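The cpu_locks cleanup just above follows a consistent kill pattern: probe with kill -0, read the process name with ps --no-headers -o comm= (refusing to signal a sudo wrapper directly), then kill and wait; the second cleanup pass prints "Process with pid N is not found" only because the first pass already reaped both targets. Roughly, and only as a sketch of the pattern visible in this log (the real killprocess in autotest_common.sh handles the sudo case more carefully):

    killprocess() {   # sketch of the pattern above, not the exact helper
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 0      # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }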
00:05:58.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.284 23:50:04 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.284 23:50:04 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.284 23:50:04 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.284 23:50:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.284 23:50:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.284 23:50:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.284 23:50:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.284 23:50:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.284 23:50:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.284 23:50:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.284 23:50:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.284 23:50:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.284 23:50:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.284 23:50:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.284 23:50:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.284 23:50:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:58.284 23:50:04 thread -- scripts/common.sh@345 -- # : 1 00:05:58.284 23:50:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.284 23:50:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.285 23:50:04 thread -- scripts/common.sh@365 -- # decimal 1 00:05:58.285 23:50:04 thread -- scripts/common.sh@353 -- # local d=1 00:05:58.285 23:50:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.285 23:50:04 thread -- scripts/common.sh@355 -- # echo 1 00:05:58.285 23:50:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.285 23:50:04 thread -- scripts/common.sh@366 -- # decimal 2 00:05:58.285 23:50:04 thread -- scripts/common.sh@353 -- # local d=2 00:05:58.285 23:50:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.285 23:50:04 thread -- scripts/common.sh@355 -- # echo 2 00:05:58.285 23:50:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.285 23:50:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.285 23:50:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.285 23:50:04 thread -- scripts/common.sh@368 -- # return 0 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.285 --rc genhtml_branch_coverage=1 00:05:58.285 --rc genhtml_function_coverage=1 00:05:58.285 --rc genhtml_legend=1 00:05:58.285 --rc geninfo_all_blocks=1 00:05:58.285 --rc geninfo_unexecuted_blocks=1 00:05:58.285 00:05:58.285 ' 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.285 --rc genhtml_branch_coverage=1 00:05:58.285 --rc genhtml_function_coverage=1 00:05:58.285 --rc genhtml_legend=1 00:05:58.285 --rc geninfo_all_blocks=1 00:05:58.285 --rc geninfo_unexecuted_blocks=1 00:05:58.285 00:05:58.285 ' 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.285 --rc genhtml_branch_coverage=1 00:05:58.285 --rc genhtml_function_coverage=1 00:05:58.285 --rc genhtml_legend=1 00:05:58.285 --rc geninfo_all_blocks=1 00:05:58.285 --rc geninfo_unexecuted_blocks=1 00:05:58.285 00:05:58.285 ' 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.285 --rc genhtml_branch_coverage=1 00:05:58.285 --rc genhtml_function_coverage=1 00:05:58.285 --rc genhtml_legend=1 00:05:58.285 --rc geninfo_all_blocks=1 00:05:58.285 --rc geninfo_unexecuted_blocks=1 00:05:58.285 00:05:58.285 ' 00:05:58.285 23:50:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.285 23:50:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.285 ************************************ 00:05:58.285 START TEST thread_poller_perf 00:05:58.285 ************************************ 00:05:58.285 23:50:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.285 [2024-11-18 23:50:04.932097] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:58.285 [2024-11-18 23:50:04.932327] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59510 ] 00:05:58.544 [2024-11-18 23:50:05.086629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.544 [2024-11-18 23:50:05.200481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.544 Running 1000 pollers for 1 seconds with 1 microseconds period. 
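poller_perf registers -b pollers with a period of -l microseconds (0 means a busy poller that runs on every reactor iteration) and spins the reactor for -t seconds, then reports the average cost of one poller invocation. The two invocations in this log differ only in the period:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 s run
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 1000 busy pollers, no period, 1 s run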
00:05:59.918 [2024-11-18T23:50:06.610Z] ====================================== 00:05:59.918 [2024-11-18T23:50:06.610Z] busy:2616025928 (cyc) 00:05:59.918 [2024-11-18T23:50:06.610Z] total_run_count: 307000 00:05:59.918 [2024-11-18T23:50:06.610Z] tsc_hz: 2600000000 (cyc) 00:05:59.918 [2024-11-18T23:50:06.610Z] ====================================== 00:05:59.918 [2024-11-18T23:50:06.610Z] poller_cost: 8521 (cyc), 3277 (nsec) 00:05:59.918 ************************************ 00:05:59.918 END TEST thread_poller_perf 00:05:59.918 ************************************ 00:05:59.918 00:05:59.918 real 0m1.443s 00:05:59.918 user 0m1.276s 00:05:59.918 sys 0m0.060s 00:05:59.918 23:50:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.918 23:50:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.918 23:50:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.918 23:50:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:59.918 23:50:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.918 23:50:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.918 ************************************ 00:05:59.918 START TEST thread_poller_perf 00:05:59.918 ************************************ 00:05:59.918 23:50:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.918 [2024-11-18 23:50:06.420026] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:59.918 [2024-11-18 23:50:06.420246] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:05:59.918 [2024-11-18 23:50:06.569056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.176 Running 1000 pollers for 1 seconds with 0 microseconds period. 
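The figures in the table above are consistent with poller_cost being busy cycles divided by invocation count, converted to nanoseconds via the reported TSC rate (2600000000 cyc/s, i.e. 2.6 cycles per nanosecond):

    poller_cost (cyc)  = busy / total_run_count = 2616025928 / 307000 ≈ 8521
    poller_cost (nsec) = 8521 / 2.6             ≈ 3277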
00:06:00.176 [2024-11-18 23:50:06.667873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.111 [2024-11-18T23:50:07.803Z] ====================================== 00:06:01.111 [2024-11-18T23:50:07.803Z] busy:2602605098 (cyc) 00:06:01.111 [2024-11-18T23:50:07.803Z] total_run_count: 5303000 00:06:01.111 [2024-11-18T23:50:07.803Z] tsc_hz: 2600000000 (cyc) 00:06:01.111 [2024-11-18T23:50:07.803Z] ====================================== 00:06:01.111 [2024-11-18T23:50:07.803Z] poller_cost: 490 (cyc), 188 (nsec) 00:06:01.111 00:06:01.111 real 0m1.407s 00:06:01.111 user 0m1.237s 00:06:01.111 sys 0m0.064s 00:06:01.111 23:50:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.111 23:50:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.111 ************************************ 00:06:01.111 END TEST thread_poller_perf 00:06:01.111 ************************************ 00:06:01.370 23:50:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.370 ************************************ 00:06:01.370 END TEST thread 00:06:01.370 ************************************ 00:06:01.370 00:06:01.370 real 0m3.078s 00:06:01.370 user 0m2.607s 00:06:01.370 sys 0m0.263s 00:06:01.370 23:50:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.370 23:50:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.370 23:50:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:01.370 23:50:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.370 23:50:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.370 23:50:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.370 23:50:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.370 ************************************ 00:06:01.370 START TEST app_cmdline 00:06:01.370 ************************************ 00:06:01.370 23:50:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.370 * Looking for test storage... 
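The same arithmetic holds for the zero-period run above, and the comparison is the point of running both: a busy poller costs about 490 cycles per invocation versus roughly 8521 for a 1 us timed poller, the gap presumably reflecting the timer bookkeeping on each firing:

    poller_cost (cyc)  = 2602605098 / 5303000 ≈ 490
    poller_cost (nsec) = 490 / 2.6            ≈ 188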
00:06:01.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.370 23:50:07 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.370 23:50:07 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.370 23:50:07 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.370 23:50:07 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:01.370 23:50:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.370 23:50:08 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:01.370 23:50:08 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.370 23:50:08 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.370 --rc genhtml_branch_coverage=1 00:06:01.370 --rc genhtml_function_coverage=1 00:06:01.370 --rc genhtml_legend=1 00:06:01.370 --rc geninfo_all_blocks=1 00:06:01.371 --rc geninfo_unexecuted_blocks=1 00:06:01.371 00:06:01.371 ' 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.371 --rc genhtml_branch_coverage=1 00:06:01.371 --rc genhtml_function_coverage=1 00:06:01.371 --rc genhtml_legend=1 00:06:01.371 --rc geninfo_all_blocks=1 00:06:01.371 --rc geninfo_unexecuted_blocks=1 00:06:01.371 
00:06:01.371 ' 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.371 --rc genhtml_branch_coverage=1 00:06:01.371 --rc genhtml_function_coverage=1 00:06:01.371 --rc genhtml_legend=1 00:06:01.371 --rc geninfo_all_blocks=1 00:06:01.371 --rc geninfo_unexecuted_blocks=1 00:06:01.371 00:06:01.371 ' 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.371 --rc genhtml_branch_coverage=1 00:06:01.371 --rc genhtml_function_coverage=1 00:06:01.371 --rc genhtml_legend=1 00:06:01.371 --rc geninfo_all_blocks=1 00:06:01.371 --rc geninfo_unexecuted_blocks=1 00:06:01.371 00:06:01.371 ' 00:06:01.371 23:50:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.371 23:50:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59636 00:06:01.371 23:50:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59636 00:06:01.371 23:50:08 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59636 ']' 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.371 23:50:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 [2024-11-18 23:50:08.083450] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:01.629 [2024-11-18 23:50:08.083721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:06:01.629 [2024-11-18 23:50:08.238604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.888 [2024-11-18 23:50:08.336075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.456 23:50:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.456 23:50:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:02.456 23:50:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:02.456 { 00:06:02.456 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:02.456 "fields": { 00:06:02.456 "major": 25, 00:06:02.456 "minor": 1, 00:06:02.456 "patch": 0, 00:06:02.456 "suffix": "-pre", 00:06:02.456 "commit": "d47eb51c9" 00:06:02.456 } 00:06:02.456 } 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.456 23:50:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:02.456 23:50:09 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.713 request: 00:06:02.713 { 00:06:02.713 "method": "env_dpdk_get_mem_stats", 00:06:02.713 "req_id": 1 00:06:02.713 } 00:06:02.713 Got JSON-RPC error response 00:06:02.713 response: 00:06:02.713 { 00:06:02.713 "code": -32601, 00:06:02.713 "message": "Method not found" 00:06:02.713 } 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.713 23:50:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59636 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59636 ']' 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59636 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59636 00:06:02.713 killing process with pid 59636 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59636' 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 59636 00:06:02.713 23:50:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 59636 00:06:04.088 00:06:04.088 real 0m2.655s 00:06:04.088 user 0m2.831s 00:06:04.088 sys 0m0.450s 00:06:04.088 23:50:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.088 ************************************ 00:06:04.088 END TEST app_cmdline 00:06:04.088 ************************************ 00:06:04.088 23:50:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.088 23:50:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.088 23:50:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.088 23:50:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.088 23:50:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.088 ************************************ 00:06:04.088 START TEST version 00:06:04.088 ************************************ 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.088 * Looking for test storage... 
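The "Method not found" above is the behavior under test: cmdline.sh starts the target with an RPC allowlist, so only the two named methods are served and anything else is rejected with -32601 as if it did not exist. The sequence, using only commands that appear in this log:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version           # allowed: returns the version JSON above
    scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats     # rejected: -32601 "Method not found"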
00:06:04.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.088 23:50:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.088 23:50:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.088 23:50:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.088 23:50:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.088 23:50:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.088 23:50:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.088 23:50:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.088 23:50:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.088 23:50:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.088 23:50:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.088 23:50:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.088 23:50:10 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.088 23:50:10 version -- scripts/common.sh@345 -- # : 1 00:06:04.088 23:50:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.088 23:50:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.088 23:50:10 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.088 23:50:10 version -- scripts/common.sh@353 -- # local d=1 00:06:04.088 23:50:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.088 23:50:10 version -- scripts/common.sh@355 -- # echo 1 00:06:04.088 23:50:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.088 23:50:10 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.088 23:50:10 version -- scripts/common.sh@353 -- # local d=2 00:06:04.088 23:50:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.088 23:50:10 version -- scripts/common.sh@355 -- # echo 2 00:06:04.088 23:50:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.088 23:50:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.088 23:50:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.088 23:50:10 version -- scripts/common.sh@368 -- # return 0 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.088 --rc genhtml_branch_coverage=1 00:06:04.088 --rc genhtml_function_coverage=1 00:06:04.088 --rc genhtml_legend=1 00:06:04.088 --rc geninfo_all_blocks=1 00:06:04.088 --rc geninfo_unexecuted_blocks=1 00:06:04.088 00:06:04.088 ' 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.088 --rc genhtml_branch_coverage=1 00:06:04.088 --rc genhtml_function_coverage=1 00:06:04.088 --rc genhtml_legend=1 00:06:04.088 --rc geninfo_all_blocks=1 00:06:04.088 --rc geninfo_unexecuted_blocks=1 00:06:04.088 00:06:04.088 ' 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.088 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:04.088 --rc genhtml_branch_coverage=1 00:06:04.088 --rc genhtml_function_coverage=1 00:06:04.088 --rc genhtml_legend=1 00:06:04.088 --rc geninfo_all_blocks=1 00:06:04.088 --rc geninfo_unexecuted_blocks=1 00:06:04.088 00:06:04.088 ' 00:06:04.088 23:50:10 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.088 --rc genhtml_branch_coverage=1 00:06:04.088 --rc genhtml_function_coverage=1 00:06:04.088 --rc genhtml_legend=1 00:06:04.088 --rc geninfo_all_blocks=1 00:06:04.088 --rc geninfo_unexecuted_blocks=1 00:06:04.088 00:06:04.088 ' 00:06:04.088 23:50:10 version -- app/version.sh@17 -- # get_header_version major 00:06:04.088 23:50:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.088 23:50:10 version -- app/version.sh@14 -- # cut -f2 00:06:04.088 23:50:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.088 23:50:10 version -- app/version.sh@17 -- # major=25 00:06:04.088 23:50:10 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.089 23:50:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # cut -f2 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.089 23:50:10 version -- app/version.sh@18 -- # minor=1 00:06:04.089 23:50:10 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # cut -f2 00:06:04.089 23:50:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.089 23:50:10 version -- app/version.sh@19 -- # patch=0 00:06:04.089 23:50:10 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # cut -f2 00:06:04.089 23:50:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.089 23:50:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.089 23:50:10 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.089 23:50:10 version -- app/version.sh@22 -- # version=25.1 00:06:04.089 23:50:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.089 23:50:10 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.089 23:50:10 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:04.089 23:50:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.089 23:50:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.089 23:50:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.089 00:06:04.089 real 0m0.205s 00:06:04.089 user 0m0.124s 00:06:04.089 sys 0m0.104s 00:06:04.089 ************************************ 00:06:04.089 END TEST version 00:06:04.089 ************************************ 00:06:04.089 23:50:10 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.089 23:50:10 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.348 23:50:10 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:04.348 23:50:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:04.348 23:50:10 -- spdk/autotest.sh@194 -- # uname -s 00:06:04.348 23:50:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:04.348 23:50:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.348 23:50:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.348 23:50:10 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:04.348 23:50:10 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.348 23:50:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.348 23:50:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.348 23:50:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.348 ************************************ 00:06:04.348 START TEST blockdev_nvme 00:06:04.348 ************************************ 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.348 * Looking for test storage... 00:06:04.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.348 23:50:10 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.348 --rc genhtml_branch_coverage=1 00:06:04.348 --rc genhtml_function_coverage=1 00:06:04.348 --rc genhtml_legend=1 00:06:04.348 --rc geninfo_all_blocks=1 00:06:04.348 --rc geninfo_unexecuted_blocks=1 00:06:04.348 00:06:04.348 ' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.348 --rc genhtml_branch_coverage=1 00:06:04.348 --rc genhtml_function_coverage=1 00:06:04.348 --rc genhtml_legend=1 00:06:04.348 --rc geninfo_all_blocks=1 00:06:04.348 --rc geninfo_unexecuted_blocks=1 00:06:04.348 00:06:04.348 ' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.348 --rc genhtml_branch_coverage=1 00:06:04.348 --rc genhtml_function_coverage=1 00:06:04.348 --rc genhtml_legend=1 00:06:04.348 --rc geninfo_all_blocks=1 00:06:04.348 --rc geninfo_unexecuted_blocks=1 00:06:04.348 00:06:04.348 ' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.348 --rc genhtml_branch_coverage=1 00:06:04.348 --rc genhtml_function_coverage=1 00:06:04.348 --rc genhtml_legend=1 00:06:04.348 --rc geninfo_all_blocks=1 00:06:04.348 --rc geninfo_unexecuted_blocks=1 00:06:04.348 00:06:04.348 ' 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:04.348 23:50:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59802 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59802 00:06:04.348 23:50:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.348 23:50:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.348 [2024-11-18 23:50:11.036101] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
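setup_nvme_conf builds the NVMe bdev configuration with scripts/gen_nvme.sh and feeds it to load_subsystem_config; the four bdev_nvme_attach_controller entries visible just below (Nvme0 through Nvme3 at PCIe 0000:00:10.0 through 0000:00:13.0) come from that generated JSON. The shape of one entry, taken from the config in this log (the command-substitution form is a sketch; the test captures gen_nvme.sh output with mapfile and passes it to rpc_cmd with -j):

    scripts/rpc.py load_subsystem_config -j "$(scripts/gen_nvme.sh)"   # sketch of how the config is applied
    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
    ] }   # plus three more entries of the same shape for Nvme1..Nvme3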
00:06:04.348 [2024-11-18 23:50:11.036389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:06:04.607 [2024-11-18 23:50:11.190473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.607 [2024-11-18 23:50:11.286805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.173 23:50:11 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.173 23:50:11 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:05.173 23:50:11 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:05.173 23:50:11 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:05.173 23:50:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:05.432 23:50:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:05.432 23:50:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.432 23:50:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:05.432 23:50:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.432 23:50:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:05.693 23:50:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:05.693 23:50:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:05.694 23:50:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "eabb8a71-9c7b-420a-b2ca-2e621cb40800"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eabb8a71-9c7b-420a-b2ca-2e621cb40800",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "66351537-c6aa-4cac-aa3c-33fe82a222a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "66351537-c6aa-4cac-aa3c-33fe82a222a5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7069845e-ecf4-4b40-9820-5fe3f5ae90e8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7069845e-ecf4-4b40-9820-5fe3f5ae90e8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "71bf0151-02d8-49fc-b322-fb93dddb278e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71bf0151-02d8-49fc-b322-fb93dddb278e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1095d0d0-e2f7-4bd0-bb64-ea3e5315f98e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "1095d0d0-e2f7-4bd0-bb64-ea3e5315f98e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9fe10544-b5c4-44c4-b687-76dceea22d6f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9fe10544-b5c4-44c4-b687-76dceea22d6f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:05.694 23:50:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:05.694 23:50:12 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:05.694 23:50:12 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:05.694 23:50:12 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59802 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59802 ']' 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59802 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:05.694 23:50:12 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59802 00:06:05.694 killing process with pid 59802 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59802' 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59802 00:06:05.694 23:50:12 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59802 00:06:07.616 23:50:13 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:07.616 23:50:13 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:07.616 23:50:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:07.616 23:50:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.616 23:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.616 ************************************ 00:06:07.616 START TEST bdev_hello_world 00:06:07.616 ************************************ 00:06:07.616 23:50:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:07.616 [2024-11-18 23:50:13.929472] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:07.616 [2024-11-18 23:50:13.929603] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59886 ] 00:06:07.616 [2024-11-18 23:50:14.086083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.616 [2024-11-18 23:50:14.185734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.183 [2024-11-18 23:50:14.696046] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:08.183 [2024-11-18 23:50:14.696093] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:08.183 [2024-11-18 23:50:14.696108] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:08.183 [2024-11-18 23:50:14.698244] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:08.183 [2024-11-18 23:50:14.698901] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:08.183 [2024-11-18 23:50:14.698929] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:08.183 [2024-11-18 23:50:14.699164] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
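The sequence above is self-contained enough to replay by hand. A minimal sketch, assuming an SPDK checkout built at /home/vagrant/spdk_repo/spdk and a PCIe NVMe controller at 0000:00:10.0 (both taken from this log; substitute your own paths and addresses), run from the repo root:

    # Inspect bdevs the way the harness does: start the target, attach a controller over JSON-RPC.
    sudo build/bin/spdk_tgt &
    sudo scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Same jq filter the test uses: keep only unclaimed bdevs and extract their names.
    sudo scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    sudo scripts/rpc.py spdk_kill_instance SIGTERM   # release the device before running another app
    # Or run the standalone example against a JSON config, exactly as the log does:
    sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

hello_bdev opens the named bdev, writes "Hello World!" through an I/O channel, and reads it back; the write_complete/read_complete notices above are that round trip.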
00:06:08.183 00:06:08.183 [2024-11-18 23:50:14.699186] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:08.750 00:06:08.750 real 0m1.430s 00:06:08.750 user 0m1.132s 00:06:08.750 sys 0m0.193s 00:06:08.750 23:50:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.750 ************************************ 00:06:08.750 END TEST bdev_hello_world 00:06:08.750 ************************************ 00:06:08.750 23:50:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:08.750 23:50:15 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:08.750 23:50:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.750 23:50:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.750 23:50:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.750 ************************************ 00:06:08.750 START TEST bdev_bounds 00:06:08.750 ************************************ 00:06:08.750 Process bdevio pid: 59923 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59923 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59923' 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:08.750 23:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59923 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59923 ']' 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.751 23:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:08.751 [2024-11-18 23:50:15.392375] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:08.751 [2024-11-18 23:50:15.392609] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:06:09.009 [2024-11-18 23:50:15.542401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.009 [2024-11-18 23:50:15.641234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.009 [2024-11-18 23:50:15.641579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.009 [2024-11-18 23:50:15.641587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.946 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.946 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:09.946 23:50:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:09.946 I/O targets: 00:06:09.946 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:09.946 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:09.946 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.946 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.946 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.946 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:09.946 00:06:09.946 00:06:09.946 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.946 http://cunit.sourceforge.net/ 00:06:09.946 00:06:09.946 00:06:09.946 Suite: bdevio tests on: Nvme3n1 00:06:09.946 Test: blockdev write read block ...passed 00:06:09.946 Test: blockdev write zeroes read block ...passed 00:06:09.946 Test: blockdev write zeroes read no split ...passed 00:06:09.946 Test: blockdev write zeroes read split ...passed 00:06:09.946 Test: blockdev write zeroes read split partial ...passed 00:06:09.946 Test: blockdev reset ...[2024-11-18 23:50:16.436104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:09.946 passed 00:06:09.946 Test: blockdev write read 8 blocks ...[2024-11-18 23:50:16.438871] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:09.946 passed 00:06:09.946 Test: blockdev write read size > 128k ...passed 00:06:09.946 Test: blockdev write read invalid size ...passed 00:06:09.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.946 Test: blockdev write read max offset ...passed 00:06:09.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.946 Test: blockdev writev readv 8 blocks ...passed 00:06:09.946 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.946 Test: blockdev writev readv block ...passed 00:06:09.946 Test: blockdev writev readv size > 128k ...passed 00:06:09.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.946 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.445465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7c0a000 len:0x1000 00:06:09.946 [2024-11-18 23:50:16.445524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.946 passed 00:06:09.946 Test: blockdev nvme passthru rw ...passed 00:06:09.946 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:50:16.446106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.946 [2024-11-18 23:50:16.446146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.946 passed 00:06:09.946 Test: blockdev nvme admin passthru ...passed 00:06:09.946 Test: blockdev copy ...passed 00:06:09.946 Suite: bdevio tests on: Nvme2n3 00:06:09.946 Test: blockdev write read block ...passed 00:06:09.946 Test: blockdev write zeroes read block ...passed 00:06:09.946 Test: blockdev write zeroes read no split ...passed 00:06:09.946 Test: blockdev write zeroes read split ...passed 00:06:09.946 Test: blockdev write zeroes read split partial ...passed 00:06:09.946 Test: blockdev reset ...[2024-11-18 23:50:16.493944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.946 [2024-11-18 23:50:16.496878] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.946 passed 00:06:09.946 Test: blockdev write read 8 blocks ...passed 00:06:09.946 Test: blockdev write read size > 128k ...passed 00:06:09.946 Test: blockdev write read invalid size ...passed 00:06:09.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.946 Test: blockdev write read max offset ...passed 00:06:09.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.946 Test: blockdev writev readv 8 blocks ...passed 00:06:09.946 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.946 Test: blockdev writev readv block ...passed 00:06:09.946 Test: blockdev writev readv size > 128k ...passed 00:06:09.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.946 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.505335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b606000 len:0x1000 00:06:09.946 [2024-11-18 23:50:16.505807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.946 passed 00:06:09.946 Test: blockdev nvme passthru rw ...passed 00:06:09.946 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:50:16.507461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.946 [2024-11-18 23:50:16.507935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.946 passed 00:06:09.946 Test: blockdev nvme admin passthru ...passed 00:06:09.946 Test: blockdev copy ...passed 00:06:09.946 Suite: bdevio tests on: Nvme2n2 00:06:09.946 Test: blockdev write read block ...passed 00:06:09.946 Test: blockdev write zeroes read block ...passed 00:06:09.946 Test: blockdev write zeroes read no split ...passed 00:06:09.946 Test: blockdev write zeroes read split ...passed 00:06:09.946 Test: blockdev write zeroes read split partial ...passed 00:06:09.946 Test: blockdev reset ...[2024-11-18 23:50:16.561931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.946 passed 00:06:09.946 Test: blockdev write read 8 blocks ...[2024-11-18 23:50:16.565045] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.946 passed 00:06:09.946 Test: blockdev write read size > 128k ...passed 00:06:09.946 Test: blockdev write read invalid size ...passed 00:06:09.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.946 Test: blockdev write read max offset ...passed 00:06:09.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.946 Test: blockdev writev readv 8 blocks ...passed 00:06:09.946 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.946 Test: blockdev writev readv block ...passed 00:06:09.947 Test: blockdev writev readv size > 128k ...passed 00:06:09.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.947 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.572481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a3c000 len:0x1000 00:06:09.947 [2024-11-18 23:50:16.572606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.947 passed 00:06:09.947 Test: blockdev nvme passthru rw ...passed 00:06:09.947 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.947 Test: blockdev nvme admin passthru ...[2024-11-18 23:50:16.573653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.947 [2024-11-18 23:50:16.573735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.947 passed 00:06:09.947 Test: blockdev copy ...passed 00:06:09.947 Suite: bdevio tests on: Nvme2n1 00:06:09.947 Test: blockdev write read block ...passed 00:06:09.947 Test: blockdev write zeroes read block ...passed 00:06:09.947 Test: blockdev write zeroes read no split ...passed 00:06:09.947 Test: blockdev write zeroes read split ...passed 00:06:09.947 Test: blockdev write zeroes read split partial ...passed 00:06:09.947 Test: blockdev reset ...[2024-11-18 23:50:16.630854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.947 [2024-11-18 23:50:16.633635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.947 passed 00:06:09.947 Test: blockdev write read 8 blocks ...passed 00:06:09.947 Test: blockdev write read size > 128k ...passed 00:06:09.947 Test: blockdev write read invalid size ...passed 00:06:10.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.206 Test: blockdev write read max offset ...passed 00:06:10.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.206 Test: blockdev writev readv 8 blocks ...passed 00:06:10.206 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.206 Test: blockdev writev readv block ...passed 00:06:10.206 Test: blockdev writev readv size > 128k ...passed 00:06:10.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.206 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.640667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a38000 len:0x1000 00:06:10.206 [2024-11-18 23:50:16.640818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.206 passed 00:06:10.206 Test: blockdev nvme passthru rw ...passed 00:06:10.206 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:50:16.641743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.206 [2024-11-18 23:50:16.641799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.206 passed 00:06:10.206 Test: blockdev nvme admin passthru ...passed 00:06:10.206 Test: blockdev copy ...passed 00:06:10.206 Suite: bdevio tests on: Nvme1n1 00:06:10.206 Test: blockdev write read block ...passed 00:06:10.206 Test: blockdev write zeroes read block ...passed 00:06:10.206 Test: blockdev write zeroes read no split ...passed 00:06:10.206 Test: blockdev write zeroes read split ...passed 00:06:10.206 Test: blockdev write zeroes read split partial ...passed 00:06:10.206 Test: blockdev reset ...[2024-11-18 23:50:16.697570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:10.206 [2024-11-18 23:50:16.700169] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:10.206 passed 00:06:10.206 Test: blockdev write read 8 blocks ...passed 00:06:10.206 Test: blockdev write read size > 128k ...
00:06:10.206 passed 00:06:10.206 Test: blockdev write read invalid size ...passed 00:06:10.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.206 Test: blockdev write read max offset ...passed 00:06:10.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.206 Test: blockdev writev readv 8 blocks ...passed 00:06:10.207 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.207 Test: blockdev writev readv block ...passed 00:06:10.207 Test: blockdev writev readv size > 128k ...passed 00:06:10.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.207 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.706253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a34000 len:0x1000 00:06:10.207 [2024-11-18 23:50:16.706291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.207 passed 00:06:10.207 Test: blockdev nvme passthru rw ...passed 00:06:10.207 Test: blockdev nvme passthru vendor specific ...passed 00:06:10.207 Test: blockdev nvme admin passthru ...[2024-11-18 23:50:16.706844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.207 [2024-11-18 23:50:16.706870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.207 passed 00:06:10.207 Test: blockdev copy ...passed 00:06:10.207 Suite: bdevio tests on: Nvme0n1 00:06:10.207 Test: blockdev write read block ...passed 00:06:10.207 Test: blockdev write zeroes read block ...passed 00:06:10.207 Test: blockdev write zeroes read no split ...passed 00:06:10.207 Test: blockdev write zeroes read split ...passed 00:06:10.207 Test: blockdev write zeroes read split partial ...passed 00:06:10.207 Test: blockdev reset ...[2024-11-18 23:50:16.749280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:10.207 [2024-11-18 23:50:16.751827] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:10.207 passed 00:06:10.207 Test: blockdev write read 8 blocks ...passed 00:06:10.207 Test: blockdev write read size > 128k ...passed 00:06:10.207 Test: blockdev write read invalid size ...passed 00:06:10.207 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.207 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.207 Test: blockdev write read max offset ...passed 00:06:10.207 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.207 Test: blockdev writev readv 8 blocks ...passed 00:06:10.207 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.207 Test: blockdev writev readv block ...passed 00:06:10.207 Test: blockdev writev readv size > 128k ...passed 00:06:10.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.207 Test: blockdev comparev and writev ...[2024-11-18 23:50:16.758296] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:10.207 separate metadata which is not supported yet. 
00:06:10.207 separate metadata which is not supported yet. 00:06:10.207 passed 00:06:10.207 Test: blockdev nvme passthru rw ...passed 00:06:10.207 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:50:16.758881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:10.207 [2024-11-18 23:50:16.759057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:10.207 passed 00:06:10.207 Test: blockdev nvme admin passthru ...passed 00:06:10.207 Test: blockdev copy ...passed 00:06:10.207 00:06:10.207 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.207 suites 6 6 n/a 0 0 00:06:10.207 tests 138 138 138 0 0 00:06:10.207 asserts 893 893 893 0 n/a 00:06:10.207 00:06:10.207 Elapsed time = 0.974 seconds 00:06:10.207 0 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59923 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59923 ']' 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59923 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59923 00:06:10.207 killing process with pid 59923 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59923' 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59923 00:06:10.207 23:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59923 00:06:10.774 23:50:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:10.774 00:06:10.774 real 0m2.099s 00:06:10.774 user 0m5.480s 00:06:10.774 sys 0m0.293s 00:06:10.774 23:50:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.774 23:50:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.774 ************************************ 00:06:10.774 END TEST bdev_bounds 00:06:10.774 ************************************ 00:06:11.034 23:50:17 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:11.034 23:50:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:11.034 23:50:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.034 23:50:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.034 ************************************ 00:06:11.034 START TEST bdev_nbd 00:06:11.034 ************************************ 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:11.034 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59977 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59977 /var/tmp/spdk-nbd.sock 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59977 ']' 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:11.034 23:50:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:11.034 [2024-11-18 23:50:17.544607] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:11.034 [2024-11-18 23:50:17.544869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.034 [2024-11-18 23:50:17.697258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.293 [2024-11-18 23:50:17.780705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.861 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.122 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.122 1+0 records in 
00:06:12.122 1+0 records out 00:06:12.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039955 s, 10.3 MB/s 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.123 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.382 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.383 1+0 records in 00:06:12.383 1+0 records out 00:06:12.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444806 s, 9.2 MB/s 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.383 23:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:12.383 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:12.383 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.641 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.642 1+0 records in 00:06:12.642 1+0 records out 00:06:12.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461184 s, 8.9 MB/s 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.642 1+0 records in 00:06:12.642 1+0 records out 00:06:12.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550505 s, 7.4 MB/s 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.642 23:50:19 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.642 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.900 1+0 records in 00:06:12.900 1+0 records out 00:06:12.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543049 s, 7.5 MB/s 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.900 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.158 1+0 records in 00:06:13.158 1+0 records out 00:06:13.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393652 s, 10.4 MB/s 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.158 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.415 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd0", 00:06:13.415 "bdev_name": "Nvme0n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd1", 00:06:13.415 "bdev_name": "Nvme1n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd2", 00:06:13.415 "bdev_name": "Nvme2n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd3", 00:06:13.415 "bdev_name": "Nvme2n2" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd4", 00:06:13.415 "bdev_name": "Nvme2n3" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd5", 00:06:13.415 "bdev_name": "Nvme3n1" 00:06:13.415 } 00:06:13.415 ]' 00:06:13.415 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:13.415 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:13.415 23:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd0", 00:06:13.415 "bdev_name": "Nvme0n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd1", 00:06:13.415 "bdev_name": "Nvme1n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd2", 00:06:13.415 "bdev_name": "Nvme2n1" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd3", 00:06:13.415 "bdev_name": "Nvme2n2" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd4", 00:06:13.415 "bdev_name": "Nvme2n3" 00:06:13.415 }, 00:06:13.415 { 00:06:13.415 "nbd_device": "/dev/nbd5", 00:06:13.415 "bdev_name": "Nvme3n1" 00:06:13.415 } 00:06:13.415 ]' 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:13.415 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.416 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.674 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.933 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:14.191 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:14.191 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.192 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.450 23:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.708 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.966 23:50:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:14.966 /dev/nbd0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.966 
23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.966 1+0 records in 00:06:14.966 1+0 records out 00:06:14.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405109 s, 10.1 MB/s 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.966 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.967 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.967 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.967 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.967 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:15.227 /dev/nbd1 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.227 1+0 records in 00:06:15.227 1+0 records out 00:06:15.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273445 s, 15.0 MB/s 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.227 23:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:15.489 /dev/nbd10 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.489 1+0 records in 00:06:15.489 1+0 records out 00:06:15.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034517 s, 11.9 MB/s 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.489 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:15.750 /dev/nbd11 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.750 23:50:22 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.750 1+0 records in 00:06:15.750 1+0 records out 00:06:15.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376758 s, 10.9 MB/s 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.750 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:16.011 /dev/nbd12 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.011 1+0 records in 00:06:16.011 1+0 records out 00:06:16.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554442 s, 7.4 MB/s 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.011 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:16.273 /dev/nbd13 
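Each nbd_start_disk RPC in the loop above is followed by the same readiness probe (waitfornbd), repeated here for all six devices. A minimal sketch of that probe, reconstructed from the common/autotest_common.sh trace lines: the helper name, the 20-attempt bound, the /proc/partitions grep, and the single 4 KiB O_DIRECT read all appear verbatim in the log, while the 0.1 s retry sleep and the scratch-file path are assumptions.

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest    # scratch path is an assumption
        # Poll until the kernel publishes the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Confirm the node is readable: one 4 KiB direct read must yield
        # a non-empty file, retried with the same bound.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null || true
            size=$(stat -c %s "$tmp" 2>/dev/null || echo 0)
            rm -f "$tmp"
            [[ $size != 0 ]] && return 0
            sleep 0.1
        done
        return 1
    }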
00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.273 1+0 records in 00:06:16.273 1+0 records out 00:06:16.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365039 s, 11.2 MB/s 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.273 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.535 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd0", 00:06:16.535 "bdev_name": "Nvme0n1" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd1", 00:06:16.535 "bdev_name": "Nvme1n1" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd10", 00:06:16.535 "bdev_name": "Nvme2n1" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd11", 00:06:16.535 "bdev_name": "Nvme2n2" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd12", 00:06:16.535 "bdev_name": "Nvme2n3" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd13", 00:06:16.535 "bdev_name": "Nvme3n1" 00:06:16.535 } 00:06:16.535 ]' 00:06:16.535 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd0", 00:06:16.535 "bdev_name": "Nvme0n1" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd1", 00:06:16.535 "bdev_name": "Nvme1n1" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd10", 00:06:16.535 "bdev_name": "Nvme2n1" 
00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd11", 00:06:16.535 "bdev_name": "Nvme2n2" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd12", 00:06:16.535 "bdev_name": "Nvme2n3" 00:06:16.535 }, 00:06:16.535 { 00:06:16.535 "nbd_device": "/dev/nbd13", 00:06:16.535 "bdev_name": "Nvme3n1" 00:06:16.535 } 00:06:16.535 ]' 00:06:16.535 23:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.535 /dev/nbd1 00:06:16.535 /dev/nbd10 00:06:16.535 /dev/nbd11 00:06:16.535 /dev/nbd12 00:06:16.535 /dev/nbd13' 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.535 /dev/nbd1 00:06:16.535 /dev/nbd10 00:06:16.535 /dev/nbd11 00:06:16.535 /dev/nbd12 00:06:16.535 /dev/nbd13' 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:16.535 256+0 records in 00:06:16.535 256+0 records out 00:06:16.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620371 s, 169 MB/s 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.535 256+0 records in 00:06:16.535 256+0 records out 00:06:16.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.051716 s, 20.3 MB/s 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.535 256+0 records in 00:06:16.535 256+0 records out 00:06:16.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0559234 s, 18.8 MB/s 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:16.535 256+0 records in 00:06:16.535 256+0 records out 00:06:16.535 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.053784 s, 19.5 MB/s 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.535 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:16.796 256+0 records in 00:06:16.796 256+0 records out 00:06:16.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.052371 s, 20.0 MB/s 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:16.796 256+0 records in 00:06:16.796 256+0 records out 00:06:16.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606223 s, 17.3 MB/s 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:16.796 256+0 records in 00:06:16.796 256+0 records out 00:06:16.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0635256 s, 16.5 MB/s 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.796 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.055 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.314 23:50:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.575 
23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:17.575 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.836 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.097 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:18.358 23:50:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:18.618 malloc_lvol_verify 00:06:18.618 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:18.877 b016f59c-187e-4b58-ba86-a7e5846c553e 00:06:18.877 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:18.877 14abeb53-88ac-4ab8-96ec-aa83986b0367 00:06:18.877 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:19.138 /dev/nbd0 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:19.138 mke2fs 1.47.0 (5-Feb-2023) 00:06:19.138 Discarding device blocks: 0/4096 done 00:06:19.138 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:19.138 00:06:19.138 Allocating group tables: 0/1 done 00:06:19.138 Writing inode tables: 0/1 done 00:06:19.138 Creating journal (1024 blocks): done 00:06:19.138 Writing superblocks and filesystem accounting information: 0/1 done 00:06:19.138 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
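The nbd_with_lvol_verify step just traced repeats the export path on top of a logical volume: once nbd_get_disks confirms zero leftover exports, a malloc bdev backs a throwaway lvstore, one lvol from it is exported as /dev/nbd0, the sysfs capacity is checked, and mkfs.ext4 must succeed. A sketch under the same RPC names: every rpc.py call and argument below is copied from the log, and only the bail-out style (set -e) is an assumption.

    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # The previous stop loop must have left no exports behind.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    (( count == 0 ))
    # 16 MiB malloc bdev with 512-byte blocks backs a throwaway lvstore.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs    # 4 MiB lvol in lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # The export only counts once the kernel reports a non-zero capacity.
    [[ -e /sys/block/nbd0/size ]]
    (( $(cat /sys/block/nbd0/size) != 0 ))
    # A filesystem must fit on the 4 MiB device; mkfs failing fails the test.
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0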
00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.138 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59977 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59977 ']' 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59977 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59977 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.398 killing process with pid 59977 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59977' 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59977 00:06:19.398 23:50:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59977 00:06:19.970 23:50:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:19.970 00:06:19.970 real 0m9.117s 00:06:19.970 user 0m13.349s 00:06:19.970 sys 0m2.773s 00:06:19.970 ************************************ 00:06:19.970 END TEST bdev_nbd 00:06:19.970 ************************************ 00:06:19.970 23:50:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.970 23:50:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:19.970 23:50:26 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:19.970 skipping fio tests on NVMe due to multi-ns failures. 00:06:19.970 23:50:26 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:19.970 23:50:26 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
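Between the start and stop loops above, the harness also ran nbd_dd_data_verify in write and verify modes: one random 1 MiB payload is pushed through every export with O_DIRECT and then compared back byte-for-byte. A sketch reconstructed from the dd and cmp lines in the trace; paths, flags, and the device list are as logged.

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    # One shared 1 MiB random payload (256 x 4 KiB blocks).
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    # Write pass: push the payload through each export, bypassing the page cache.
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify pass: the first 1 MiB of every device must match the payload exactly.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"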
00:06:19.970 23:50:26 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:19.970 23:50:26 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:19.970 23:50:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:19.970 23:50:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.970 23:50:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.970 ************************************ 00:06:19.970 START TEST bdev_verify 00:06:19.970 ************************************ 00:06:19.970 23:50:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.232 [2024-11-18 23:50:26.695442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:20.232 [2024-11-18 23:50:26.695559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60344 ] 00:06:20.232 [2024-11-18 23:50:26.850396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.491 [2024-11-18 23:50:26.933507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.491 [2024-11-18 23:50:26.933674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.058 Running I/O for 5 seconds... 00:06:23.369 24960.00 IOPS, 97.50 MiB/s [2024-11-18T23:50:31.006Z] 24448.00 IOPS, 95.50 MiB/s [2024-11-18T23:50:31.952Z] 23808.00 IOPS, 93.00 MiB/s [2024-11-18T23:50:32.651Z] 23184.00 IOPS, 90.56 MiB/s [2024-11-18T23:50:32.651Z] 22860.80 IOPS, 89.30 MiB/s 00:06:25.959 Latency(us) 00:06:25.959 [2024-11-18T23:50:32.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0x0 length 0xbd0bd 00:06:25.959 Nvme0n1 : 5.06 1973.18 7.71 0.00 0.00 64737.99 11040.30 53235.40 00:06:25.959 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:25.959 Nvme0n1 : 5.06 1796.91 7.02 0.00 0.00 71006.01 13006.38 59284.87 00:06:25.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0x0 length 0xa0000 00:06:25.959 Nvme1n1 : 5.06 1971.34 7.70 0.00 0.00 64700.86 13409.67 51622.20 00:06:25.959 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0xa0000 length 0xa0000 00:06:25.959 Nvme1n1 : 5.06 1796.31 7.02 0.00 0.00 70890.99 14922.04 56461.78 00:06:25.959 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0x0 length 0x80000 00:06:25.959 Nvme2n1 : 5.07 1970.57 7.70 0.00 0.00 64628.42 14216.27 49605.71 00:06:25.959 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.959 Verification LBA range: start 0x80000 length 0x80000 00:06:25.959 Nvme2n1 : 5.06 1795.17 7.01 0.00 0.00 70785.24 16333.59 55655.19 00:06:25.959 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x0 length 0x80000 00:06:25.960 Nvme2n2 : 5.07 1970.02 7.70 0.00 0.00 64549.74 15224.52 51420.55 00:06:25.960 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x80000 length 0x80000 00:06:25.960 Nvme2n2 : 5.06 1794.69 7.01 0.00 0.00 70651.40 14518.74 54445.29 00:06:25.960 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x0 length 0x80000 00:06:25.960 Nvme2n3 : 5.07 1969.41 7.69 0.00 0.00 64478.67 12754.31 52428.80 00:06:25.960 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x80000 length 0x80000 00:06:25.960 Nvme2n3 : 5.07 1804.01 7.05 0.00 0.00 70252.21 3528.86 57671.68 00:06:25.960 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x0 length 0x20000 00:06:25.960 Nvme3n1 : 5.07 1968.85 7.69 0.00 0.00 64400.22 6301.54 53638.70 00:06:25.960 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.960 Verification LBA range: start 0x20000 length 0x20000 00:06:25.960 Nvme3n1 : 5.07 1803.50 7.04 0.00 0.00 70201.33 3780.92 59284.87 00:06:25.960 [2024-11-18T23:50:32.652Z] =================================================================================================================== 00:06:25.960 [2024-11-18T23:50:32.652Z] Total : 22613.96 88.34 0.00 0.00 67467.93 3528.86 59284.87 00:06:27.347 00:06:27.347 real 0m7.260s 00:06:27.347 user 0m13.554s 00:06:27.347 sys 0m0.226s 00:06:27.347 23:50:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.347 ************************************ 00:06:27.347 END TEST bdev_verify 00:06:27.347 ************************************ 00:06:27.347 23:50:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:27.347 23:50:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:27.347 23:50:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:27.347 23:50:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.347 23:50:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.347 ************************************ 00:06:27.347 START TEST bdev_verify_big_io 00:06:27.347 ************************************ 00:06:27.347 23:50:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:27.609 [2024-11-18 23:50:34.040024] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:27.609 [2024-11-18 23:50:34.040183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60442 ] 00:06:27.609 [2024-11-18 23:50:34.204509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.870 [2024-11-18 23:50:34.366250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.870 [2024-11-18 23:50:34.366307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.813 Running I/O for 5 seconds... 00:06:33.904 594.00 IOPS, 37.12 MiB/s [2024-11-18T23:50:41.530Z] 2058.00 IOPS, 128.62 MiB/s [2024-11-18T23:50:41.790Z] 2624.33 IOPS, 164.02 MiB/s 00:06:35.098 Latency(us) 00:06:35.098 [2024-11-18T23:50:41.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.098 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0xbd0b 00:06:35.098 Nvme0n1 : 5.71 130.55 8.16 0.00 0.00 939989.11 27827.59 1019538.51 00:06:35.098 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:35.098 Nvme0n1 : 5.94 82.30 5.14 0.00 0.00 1492057.50 18450.90 1613193.85 00:06:35.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0xa000 00:06:35.098 Nvme1n1 : 5.71 134.53 8.41 0.00 0.00 895728.25 93565.24 845313.58 00:06:35.098 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0xa000 length 0xa000 00:06:35.098 Nvme1n1 : 5.94 82.43 5.15 0.00 0.00 1409310.01 52428.80 1458327.24 00:06:35.098 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0x8000 00:06:35.098 Nvme2n1 : 5.71 134.49 8.41 0.00 0.00 867903.02 94371.84 871124.68 00:06:35.098 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x8000 length 0x8000 00:06:35.098 Nvme2n1 : 5.95 86.11 5.38 0.00 0.00 1281774.87 36700.16 1464780.01 00:06:35.098 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0x8000 00:06:35.098 Nvme2n2 : 5.79 136.88 8.56 0.00 0.00 824963.79 80256.39 884030.23 00:06:35.098 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x8000 length 0x8000 00:06:35.098 Nvme2n2 : 6.05 102.13 6.38 0.00 0.00 1032320.23 18047.61 1303460.63 00:06:35.098 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0x8000 00:06:35.098 Nvme2n3 : 5.90 147.88 9.24 0.00 0.00 744300.72 29037.49 916294.10 00:06:35.098 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x8000 length 0x8000 00:06:35.098 Nvme2n3 : 6.14 128.35 8.02 0.00 0.00 794618.43 13510.50 1477685.56 00:06:35.098 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.098 Verification LBA range: start 0x0 length 0x2000 00:06:35.098 Nvme3n1 : 5.90 162.65 10.17 0.00 0.00 661493.43 995.64 942105.21 00:06:35.098 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:06:35.098 Verification LBA range: start 0x2000 length 0x2000 00:06:35.098 Nvme3n1 : 6.38 250.87 15.68 0.00 0.00 390781.53 94.52 1477685.56 00:06:35.098 [2024-11-18T23:50:41.790Z] =================================================================================================================== 00:06:35.098 [2024-11-18T23:50:41.790Z] Total : 1579.18 98.70 0.00 0.00 846416.18 94.52 1613193.85 00:06:37.077 00:06:37.077 real 0m9.428s 00:06:37.077 user 0m17.680s 00:06:37.077 sys 0m0.379s 00:06:37.077 23:50:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.077 ************************************ 00:06:37.077 END TEST bdev_verify_big_io 00:06:37.077 ************************************ 00:06:37.077 23:50:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 23:50:43 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.077 23:50:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:37.077 23:50:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.077 23:50:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 ************************************ 00:06:37.077 START TEST bdev_write_zeroes 00:06:37.077 ************************************ 00:06:37.077 23:50:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.077 [2024-11-18 23:50:43.514938] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:37.077 [2024-11-18 23:50:43.515050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ] 00:06:37.077 [2024-11-18 23:50:43.670191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.337 [2024-11-18 23:50:43.769089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.905 Running I/O for 1 seconds... 
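The write_zeroes pass launched just above uses the same bdevperf harness as the two verify stages before it; only the workload, I/O size, duration, and core mask change between runs. A sketch of the three invocations, with every flag copied from the run_test lines in this log (comments describe the flags as used here, not the tool's full option set):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    json=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q queue depth, -o I/O size in bytes, -w workload, -t seconds, -m core mask.
    # 5 s read-back verify at 4 KiB on two cores:
    "$bdevperf" --json "$json" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    # Same shape at 64 KiB to exercise large transfers:
    "$bdevperf" --json "$json" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    # 1 s write_zeroes pass on the default single core (no -C/-m in the log):
    "$bdevperf" --json "$json" -q 128 -o 4096  -w write_zeroes -t 1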
00:06:38.846 64896.00 IOPS, 253.50 MiB/s 00:06:38.846 Latency(us) 00:06:38.846 [2024-11-18T23:50:45.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.846 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme0n1 : 1.02 10775.18 42.09 0.00 0.00 11854.38 5419.32 23592.96 00:06:38.846 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme1n1 : 1.02 10762.52 42.04 0.00 0.00 11852.47 9225.45 22685.54 00:06:38.846 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme2n1 : 1.02 10750.30 41.99 0.00 0.00 11797.24 9175.04 21273.99 00:06:38.846 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme2n2 : 1.03 10738.08 41.95 0.00 0.00 11770.48 8670.92 20568.22 00:06:38.846 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme2n3 : 1.03 10725.93 41.90 0.00 0.00 11750.40 7713.08 20971.52 00:06:38.846 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.846 Nvme3n1 : 1.03 10713.82 41.85 0.00 0.00 11735.61 6074.68 20669.05 00:06:38.846 [2024-11-18T23:50:45.538Z] =================================================================================================================== 00:06:38.846 [2024-11-18T23:50:45.538Z] Total : 64465.83 251.82 0.00 0.00 11793.43 5419.32 23592.96 00:06:39.790 00:06:39.790 real 0m2.797s 00:06:39.790 user 0m2.466s 00:06:39.790 sys 0m0.212s 00:06:39.790 23:50:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.790 23:50:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:39.790 ************************************ 00:06:39.790 END TEST bdev_write_zeroes 00:06:39.790 ************************************ 00:06:39.790 23:50:46 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.790 23:50:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:39.790 23:50:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.790 23:50:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.790 ************************************ 00:06:39.790 START TEST bdev_json_nonenclosed 00:06:39.790 ************************************ 00:06:39.790 23:50:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.790 [2024-11-18 23:50:46.400625] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:39.790 [2024-11-18 23:50:46.400765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60611 ] 00:06:40.051 [2024-11-18 23:50:46.564358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.051 [2024-11-18 23:50:46.712154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.051 [2024-11-18 23:50:46.712284] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:40.051 [2024-11-18 23:50:46.712306] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.051 [2024-11-18 23:50:46.712318] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.312 00:06:40.312 real 0m0.611s 00:06:40.312 user 0m0.381s 00:06:40.312 sys 0m0.123s 00:06:40.312 ************************************ 00:06:40.312 END TEST bdev_json_nonenclosed 00:06:40.312 ************************************ 00:06:40.312 23:50:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.312 23:50:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:40.312 23:50:46 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.312 23:50:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:40.312 23:50:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.312 23:50:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.312 ************************************ 00:06:40.312 START TEST bdev_json_nonarray 00:06:40.312 ************************************ 00:06:40.312 23:50:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.573 [2024-11-18 23:50:47.075369] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:40.573 [2024-11-18 23:50:47.075498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60642 ] 00:06:40.573 [2024-11-18 23:50:47.239923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.832 [2024-11-18 23:50:47.402560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.832 [2024-11-18 23:50:47.402705] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
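The two *ERROR* lines above are the point of these stages: bdev_json_nonenclosed and bdev_json_nonarray are deliberate negative tests that feed bdevperf a config whose body is not enclosed in {} and one whose 'subsystems' key is not an array, and the app must refuse to start rather than run I/O. A sketch of that expectation; the file names and flags come from the log, while wrapping the failure in an if-negation is an assumption about how pass/fail is decided.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bdev_dir=/home/vagrant/spdk_repo/spdk/test/bdev

    # Both configs are intentionally malformed; success here is a test failure.
    for cfg in nonenclosed.json nonarray.json; do
        if "$bdevperf" --json "$bdev_dir/$cfg" -q 128 -o 4096 -w write_zeroes -t 1; then
            echo "$cfg was accepted but should have been rejected" >&2
            exit 1
        fi
    done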
00:06:40.832 [2024-11-18 23:50:47.402728] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.832 [2024-11-18 23:50:47.402740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.093 00:06:41.093 real 0m0.605s 00:06:41.093 user 0m0.371s 00:06:41.093 sys 0m0.128s 00:06:41.093 23:50:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.093 ************************************ 00:06:41.093 END TEST bdev_json_nonarray 00:06:41.093 ************************************ 00:06:41.093 23:50:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:41.093 23:50:47 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:41.093 00:06:41.093 real 0m36.863s 00:06:41.093 user 0m57.561s 00:06:41.093 sys 0m5.095s 00:06:41.093 ************************************ 00:06:41.093 END TEST blockdev_nvme 00:06:41.093 ************************************ 00:06:41.093 23:50:47 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.093 23:50:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.093 23:50:47 -- spdk/autotest.sh@209 -- # uname -s 00:06:41.093 23:50:47 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:41.093 23:50:47 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:41.093 23:50:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.093 23:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.093 23:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.093 ************************************ 00:06:41.093 START TEST blockdev_nvme_gpt 00:06:41.093 ************************************ 00:06:41.093 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:41.355 * Looking for test storage... 
00:06:41.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.355 23:50:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.355 --rc genhtml_branch_coverage=1 00:06:41.355 --rc genhtml_function_coverage=1 00:06:41.355 --rc genhtml_legend=1 00:06:41.355 --rc geninfo_all_blocks=1 00:06:41.355 --rc geninfo_unexecuted_blocks=1 00:06:41.355 00:06:41.355 ' 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.355 --rc 
genhtml_branch_coverage=1 00:06:41.355 --rc genhtml_function_coverage=1 00:06:41.355 --rc genhtml_legend=1 00:06:41.355 --rc geninfo_all_blocks=1 00:06:41.355 --rc geninfo_unexecuted_blocks=1 00:06:41.355 00:06:41.355 ' 00:06:41.355 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.355 --rc genhtml_branch_coverage=1 00:06:41.355 --rc genhtml_function_coverage=1 00:06:41.355 --rc genhtml_legend=1 00:06:41.356 --rc geninfo_all_blocks=1 00:06:41.356 --rc geninfo_unexecuted_blocks=1 00:06:41.356 00:06:41.356 ' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.356 --rc genhtml_branch_coverage=1 00:06:41.356 --rc genhtml_function_coverage=1 00:06:41.356 --rc genhtml_legend=1 00:06:41.356 --rc geninfo_all_blocks=1 00:06:41.356 --rc geninfo_unexecuted_blocks=1 00:06:41.356 00:06:41.356 ' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:41.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
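start_spdk_tgt backgrounds the target binary and then blocks until its RPC socket answers. A minimal sketch of that wait (illustrative only; the real waitforlisten helper in autotest_common.sh also handles timeouts and stale-pid checks, and rpc.py with rpc_get_methods is the stock SPDK RPC client):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target responds to an RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done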
00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60720 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60720 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60720 ']' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.356 23:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:41.356 23:50:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 [2024-11-18 23:50:47.988903] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:41.356 [2024-11-18 23:50:47.989488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:06:41.618 [2024-11-18 23:50:48.156449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.880 [2024-11-18 23:50:48.318958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.453 23:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.453 23:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:42.453 23:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:42.453 23:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:42.453 23:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:43.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.025 Waiting for block devices as requested 00:06:43.025 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.285 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.285 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.285 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.561 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:48.561 23:50:54 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:48.561 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:48.562 23:50:54 
blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:54 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 
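The xtrace above is get_zoned_devs probing every NVMe namespace before GPT setup; each one reads "none" from its zoned queue attribute, so nothing is excluded and nvme_devs keeps all six block devices. Reconstructed from the traced lines, the per-device check amounts to:

    # a device counts as zoned only when the kernel exposes the
    # queue attribute and it reads something other than "none"
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }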
00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:48.562 BYT; 00:06:48.562 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:48.562 BYT; 00:06:48.562 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep 
-w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:48.562 23:50:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:48.562 23:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:49.497 The operation has completed successfully. 00:06:49.497 23:50:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:50.431 The operation has completed successfully. 00:06:50.431 23:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:50.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.255 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.255 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.514 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:51.514 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.514 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.514 [] 00:06:51.514 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.514 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:51.514 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.514 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.772 23:50:58 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.772 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.772 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.031 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.031 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:52.031 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:52.032 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f20af094-a158-4c89-bb0f-504bc55d90db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f20af094-a158-4c89-bb0f-504bc55d90db",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bb722b53-681b-4de8-90f5-738e403489f4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb722b53-681b-4de8-90f5-738e403489f4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d72c7067-79da-4c41-ad74-5527f7ca3444"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d72c7067-79da-4c41-ad74-5527f7ca3444",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "672745cc-d804-480f-a669-03b44db1cdb8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "672745cc-d804-480f-a669-03b44db1cdb8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cb9218b9-853a-457c-b8f8-cc9421133909"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"cb9218b9-853a-457c-b8f8-cc9421133909",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:52.032 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:52.032 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:52.032 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:52.032 23:50:58 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60720 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60720 ']' 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60720 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60720 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.032 killing process with pid 60720 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60720' 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60720 00:06:52.032 23:50:58 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60720 00:06:53.415 23:50:59 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:53.415 23:50:59 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:53.415 23:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:53.415 23:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.415 23:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 ************************************ 00:06:53.415 START TEST bdev_hello_world 00:06:53.415 ************************************ 00:06:53.415 23:50:59 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:53.415 [2024-11-18 23:50:59.862288] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:53.415 [2024-11-18 23:50:59.862403] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:06:53.415 [2024-11-18 23:51:00.023282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.672 [2024-11-18 23:51:00.139015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.237 [2024-11-18 23:51:00.700722] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:54.237 [2024-11-18 23:51:00.700770] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:54.237 [2024-11-18 23:51:00.700791] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:54.238 [2024-11-18 23:51:00.703388] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:54.238 [2024-11-18 23:51:00.704051] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:54.238 [2024-11-18 23:51:00.704078] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:54.238 [2024-11-18 23:51:00.704236] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:54.238 00:06:54.238 [2024-11-18 23:51:00.704260] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:54.811 00:06:54.811 real 0m1.652s 00:06:54.811 user 0m1.349s 00:06:54.811 sys 0m0.195s 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:54.811 ************************************ 00:06:54.811 END TEST bdev_hello_world 00:06:54.811 ************************************ 00:06:54.811 23:51:01 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:54.811 23:51:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.811 23:51:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.811 23:51:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.811 ************************************ 00:06:54.811 START TEST bdev_bounds 00:06:54.811 ************************************ 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61379 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.811 Process bdevio pid: 61379 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61379' 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61379 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61379 ']' 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.811 23:51:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:55.088 [2024-11-18 23:51:01.564165] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:55.088 [2024-11-18 23:51:01.564299] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61379 ] 00:06:55.088 [2024-11-18 23:51:01.726692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.347 [2024-11-18 23:51:01.846076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.347 [2024-11-18 23:51:01.846176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.347 [2024-11-18 23:51:01.846189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.913 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.913 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:55.913 23:51:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:55.913 I/O targets: 00:06:55.913 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:55.913 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:55.913 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:55.913 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.913 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.913 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.913 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:55.913 00:06:55.913 00:06:55.913 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.913 http://cunit.sourceforge.net/ 00:06:55.913 00:06:55.913 00:06:55.913 Suite: bdevio tests on: Nvme3n1 00:06:55.913 Test: blockdev write read block ...passed 00:06:55.913 Test: blockdev write zeroes read block ...passed 00:06:55.913 Test: blockdev write zeroes read no split ...passed 00:06:55.913 Test: blockdev write zeroes read split ...passed 00:06:55.913 Test: blockdev write zeroes read split partial ...passed 00:06:55.913 Test: blockdev reset ...[2024-11-18 23:51:02.564629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:55.913 [2024-11-18 23:51:02.567724] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:55.913 passed 00:06:55.913 Test: blockdev write read 8 blocks ...passed 00:06:55.913 Test: blockdev write read size > 128k ...passed 00:06:55.913 Test: blockdev write read invalid size ...passed 00:06:55.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.913 Test: blockdev write read max offset ...passed 00:06:55.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.913 Test: blockdev writev readv 8 blocks ...passed 00:06:55.913 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.913 Test: blockdev writev readv block ...passed 00:06:55.913 Test: blockdev writev readv size > 128k ...passed 00:06:55.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.913 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.574444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:55.913 Test: blockdev nvme passthru rw ...passed 00:06:55.913 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2b5c04000 len:0x1000 00:06:55.913 [2024-11-18 23:51:02.574584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.913 [2024-11-18 23:51:02.575151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.913 [2024-11-18 23:51:02.575176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.913 passed 00:06:55.913 Test: blockdev nvme admin passthru ...passed 00:06:55.913 Test: blockdev copy ...passed 00:06:55.913 Suite: bdevio tests on: Nvme2n3 00:06:55.913 Test: blockdev write read block ...passed 00:06:55.913 Test: blockdev write zeroes read block ...passed 00:06:55.913 Test: blockdev write zeroes read no split ...passed 00:06:56.171 Test: blockdev write zeroes read split ...passed 00:06:56.171 Test: blockdev write zeroes read split partial ...passed 00:06:56.171 Test: blockdev reset ...[2024-11-18 23:51:02.630228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:56.171 [2024-11-18 23:51:02.633697] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:56.171 passed 00:06:56.171 Test: blockdev write read 8 blocks ...passed 00:06:56.171 Test: blockdev write read size > 128k ...passed 00:06:56.171 Test: blockdev write read invalid size ...passed 00:06:56.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.171 Test: blockdev write read max offset ...passed 00:06:56.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.171 Test: blockdev writev readv 8 blocks ...passed 00:06:56.171 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.171 Test: blockdev writev readv block ...passed 00:06:56.171 Test: blockdev writev readv size > 128k ...passed 00:06:56.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.171 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.640950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5c02000 len:0x1000 00:06:56.171 [2024-11-18 23:51:02.640997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.171 passed 00:06:56.171 Test: blockdev nvme passthru rw ...passed 00:06:56.171 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.171 Test: blockdev nvme admin passthru ...[2024-11-18 23:51:02.641604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:56.171 [2024-11-18 23:51:02.641628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:56.171 passed 00:06:56.171 Test: blockdev copy ...passed 00:06:56.171 Suite: bdevio tests on: Nvme2n2 00:06:56.171 Test: blockdev write read block ...passed 00:06:56.171 Test: blockdev write zeroes read block ...passed 00:06:56.171 Test: blockdev write zeroes read no split ...passed 00:06:56.171 Test: blockdev write zeroes read split ...passed 00:06:56.171 Test: blockdev write zeroes read split partial ...passed 00:06:56.171 Test: blockdev reset ...[2024-11-18 23:51:02.694360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:56.171 [2024-11-18 23:51:02.697565] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:56.171 passed 00:06:56.171 Test: blockdev write read 8 blocks ...passed 00:06:56.171 Test: blockdev write read size > 128k ...passed 00:06:56.171 Test: blockdev write read invalid size ...passed 00:06:56.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.171 Test: blockdev write read max offset ...passed 00:06:56.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.171 Test: blockdev writev readv 8 blocks ...passed 00:06:56.171 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.171 Test: blockdev writev readv block ...passed 00:06:56.171 Test: blockdev writev readv size > 128k ...passed 00:06:56.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.171 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.703706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cac38000 len:0x1000 00:06:56.171 [2024-11-18 23:51:02.703752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.171 passed 00:06:56.171 Test: blockdev nvme passthru rw ...passed 00:06:56.171 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:51:02.704282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:56.171 [2024-11-18 23:51:02.704303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:56.171 passed 00:06:56.171 Test: blockdev nvme admin passthru ...passed 00:06:56.171 Test: blockdev copy ...passed 00:06:56.171 Suite: bdevio tests on: Nvme2n1 00:06:56.171 Test: blockdev write read block ...passed 00:06:56.171 Test: blockdev write zeroes read block ...passed 00:06:56.171 Test: blockdev write zeroes read no split ...passed 00:06:56.171 Test: blockdev write zeroes read split ...passed 00:06:56.171 Test: blockdev write zeroes read split partial ...passed 00:06:56.171 Test: blockdev reset ...[2024-11-18 23:51:02.745190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:56.171 [2024-11-18 23:51:02.748298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:56.171 passed 00:06:56.171 Test: blockdev write read 8 blocks ...passed 00:06:56.171 Test: blockdev write read size > 128k ...passed 00:06:56.171 Test: blockdev write read invalid size ...passed 00:06:56.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.171 Test: blockdev write read max offset ...passed 00:06:56.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.171 Test: blockdev writev readv 8 blocks ...passed 00:06:56.171 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.171 Test: blockdev writev readv block ...passed 00:06:56.171 Test: blockdev writev readv size > 128k ...passed 00:06:56.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.171 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.754037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cac34000 len:0x1000 00:06:56.172 [2024-11-18 23:51:02.754078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.172 passed 00:06:56.172 Test: blockdev nvme passthru rw ...passed 00:06:56.172 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:51:02.754679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:56.172 [2024-11-18 23:51:02.754703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:56.172 passed 00:06:56.172 Test: blockdev nvme admin passthru ...passed 00:06:56.172 Test: blockdev copy ...passed 00:06:56.172 Suite: bdevio tests on: Nvme1n1p2 00:06:56.172 Test: blockdev write read block ...passed 00:06:56.172 Test: blockdev write zeroes read block ...passed 00:06:56.172 Test: blockdev write zeroes read no split ...passed 00:06:56.172 Test: blockdev write zeroes read split ...passed 00:06:56.172 Test: blockdev write zeroes read split partial ...passed 00:06:56.172 Test: blockdev reset ...[2024-11-18 23:51:02.798393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:56.172 [2024-11-18 23:51:02.801193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:56.172 passed 00:06:56.172 Test: blockdev write read 8 blocks ...passed 00:06:56.172 Test: blockdev write read size > 128k ...passed 00:06:56.172 Test: blockdev write read invalid size ...passed 00:06:56.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.172 Test: blockdev write read max offset ...passed 00:06:56.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.172 Test: blockdev writev readv 8 blocks ...passed 00:06:56.172 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.172 Test: blockdev writev readv block ...passed 00:06:56.172 Test: blockdev writev readv size > 128k ...passed 00:06:56.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.172 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.808151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cac30000 len:0x1000 00:06:56.172 [2024-11-18 23:51:02.808192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.172 passed 00:06:56.172 Test: blockdev nvme passthru rw ...passed 00:06:56.172 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.172 Test: blockdev nvme admin passthru ...passed 00:06:56.172 Test: blockdev copy ...passed 00:06:56.172 Suite: bdevio tests on: Nvme1n1p1 00:06:56.172 Test: blockdev write read block ...passed 00:06:56.172 Test: blockdev write zeroes read block ...passed 00:06:56.172 Test: blockdev write zeroes read no split ...passed 00:06:56.172 Test: blockdev write zeroes read split ...passed 00:06:56.172 Test: blockdev write zeroes read split partial ...passed 00:06:56.172 Test: blockdev reset ...[2024-11-18 23:51:02.850014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:56.172 [2024-11-18 23:51:02.852706] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:56.172 passed 00:06:56.172 Test: blockdev write read 8 blocks ...passed 00:06:56.172 Test: blockdev write read size > 128k ...passed 00:06:56.172 Test: blockdev write read invalid size ...passed 00:06:56.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.172 Test: blockdev write read max offset ...passed 00:06:56.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.172 Test: blockdev writev readv 8 blocks ...passed 00:06:56.172 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.172 Test: blockdev writev readv block ...passed 00:06:56.172 Test: blockdev writev readv size > 128k ...passed 00:06:56.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.172 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.858347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b660e000 len:0x1000 00:06:56.172 [2024-11-18 23:51:02.858399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.172 passed 00:06:56.172 Test: blockdev nvme passthru rw ...passed 00:06:56.172 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.172 Test: blockdev nvme admin passthru ...passed 00:06:56.172 Test: blockdev copy ...passed 00:06:56.172 Suite: bdevio tests on: Nvme0n1 00:06:56.172 Test: blockdev write read block ...passed 00:06:56.172 Test: blockdev write zeroes read block ...passed 00:06:56.430 Test: blockdev write zeroes read no split ...passed 00:06:56.430 Test: blockdev write zeroes read split ...passed 00:06:56.430 Test: blockdev write zeroes read split partial ...passed 00:06:56.430 Test: blockdev reset ...[2024-11-18 23:51:02.906089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:56.430 [2024-11-18 23:51:02.908834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:56.430 passed 00:06:56.430 Test: blockdev write read 8 blocks ...passed 00:06:56.430 Test: blockdev write read size > 128k ...passed 00:06:56.430 Test: blockdev write read invalid size ...passed 00:06:56.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.430 Test: blockdev write read max offset ...passed 00:06:56.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.430 Test: blockdev writev readv 8 blocks ...passed 00:06:56.430 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.430 Test: blockdev writev readv block ...passed 00:06:56.430 Test: blockdev writev readv size > 128k ...passed 00:06:56.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.430 Test: blockdev comparev and writev ...[2024-11-18 23:51:02.913705] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:56.430 separate metadata which is not supported yet. 
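The skip just logged is benign: bdevio's comparev_and_writev case only handles bdevs whose metadata is interleaved with data (or absent), so a namespace that carries metadata in a separate buffer, as Nvme0n1 does here, is skipped and still counted as passed. The layout can be checked over RPC; a sketch, with the field names taken from current bdev_get_bdevs output and therefore an assumption:

# Sketch: show the metadata layout that triggers the skip above.
./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
  | jq '.[0] | {block_size, md_size, md_interleave}'
# md_size > 0 with md_interleave == false indicates a separate metadata
# buffer, the case comparev_and_writev does not support yet.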
00:06:56.430 passed 00:06:56.430 Test: blockdev nvme passthru rw ...passed 00:06:56.430 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.430 Test: blockdev nvme admin passthru ...[2024-11-18 23:51:02.914111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:56.430 [2024-11-18 23:51:02.914154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:56.430 passed 00:06:56.430 Test: blockdev copy ...passed 00:06:56.430 00:06:56.430 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.430 suites 7 7 n/a 0 0 00:06:56.430 tests 161 161 161 0 0 00:06:56.430 asserts 1025 1025 1025 0 n/a 00:06:56.430 00:06:56.430 Elapsed time = 1.066 seconds 00:06:56.430 0 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61379 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61379 ']' 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61379 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61379 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.430 killing process with pid 61379 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61379' 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61379 00:06:56.430 23:51:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61379 00:06:57.002 23:51:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:57.002 00:06:57.002 real 0m2.166s 00:06:57.002 user 0m5.444s 00:06:57.002 sys 0m0.309s 00:06:57.002 23:51:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.002 23:51:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:57.002 ************************************ 00:06:57.002 END TEST bdev_bounds 00:06:57.002 ************************************ 00:06:57.002 23:51:03 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:57.263 23:51:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:57.263 23:51:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.263 23:51:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.263 ************************************ 00:06:57.263 START TEST bdev_nbd 00:06:57.263 ************************************ 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:57.263 23:51:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61438 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61438 /var/tmp/spdk-nbd.sock 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61438 ']' 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:57.263 23:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.263 [2024-11-18 23:51:03.764992] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
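The harness starting here is straightforward despite the verbose trace: bdev_svc is a minimal SPDK application whose only job is to load the bdev configuration and listen on the RPC socket passed with -r; the nbd_* RPCs then attach each bdev to a kernel /dev/nbdN node so ordinary block tools can drive it. A hand-run equivalent of one iteration, with paths mirroring the log and the JSON config file an assumption:

# Sketch: expose one SPDK bdev as a kernel block device over NBD.
modprobe nbd
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock \
    --json ./test/bdev/bdev.json &
# (wait for the RPC socket to appear before issuing RPCs)
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
blockdev --getsize64 /dev/nbd0    # standard block tools now work on the bdev
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0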
00:06:57.263 [2024-11-18 23:51:03.765109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.263 [2024-11-18 23:51:03.918833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.524 [2024-11-18 23:51:04.033146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.093 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.354 1+0 records in 00:06:58.354 1+0 records out 00:06:58.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386551 s, 10.6 MB/s 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.354 23:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.615 1+0 records in 00:06:58.615 1+0 records out 00:06:58.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405121 s, 10.1 MB/s 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.615 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:58.875 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.876 1+0 records in 00:06:58.876 1+0 records out 00:06:58.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422418 s, 9.7 MB/s 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.876 1+0 records in 00:06:58.876 1+0 records out 00:06:58.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370283 s, 11.1 MB/s 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.876 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.136 1+0 records in 00:06:59.136 1+0 records out 00:06:59.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339462 s, 12.1 MB/s 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.136 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.397 23:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.397 1+0 records in 00:06:59.397 1+0 records out 00:06:59.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441698 s, 9.3 MB/s 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.397 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.660 1+0 records in 00:06:59.660 1+0 records out 00:06:59.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375934 s, 10.9 MB/s 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.660 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd0", 00:06:59.921 "bdev_name": "Nvme0n1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd1", 00:06:59.921 "bdev_name": "Nvme1n1p1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd2", 00:06:59.921 "bdev_name": "Nvme1n1p2" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd3", 00:06:59.921 "bdev_name": "Nvme2n1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd4", 00:06:59.921 "bdev_name": "Nvme2n2" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd5", 00:06:59.921 "bdev_name": "Nvme2n3" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd6", 00:06:59.921 "bdev_name": "Nvme3n1" 00:06:59.921 } 00:06:59.921 ]' 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd0", 00:06:59.921 "bdev_name": "Nvme0n1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd1", 00:06:59.921 "bdev_name": "Nvme1n1p1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd2", 00:06:59.921 "bdev_name": "Nvme1n1p2" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd3", 00:06:59.921 "bdev_name": "Nvme2n1" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd4", 00:06:59.921 "bdev_name": "Nvme2n2" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd5", 00:06:59.921 "bdev_name": "Nvme2n3" 00:06:59.921 }, 00:06:59.921 { 00:06:59.921 "nbd_device": "/dev/nbd6", 00:06:59.921 "bdev_name": "Nvme3n1" 00:06:59.921 } 00:06:59.921 ]' 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.921 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.182 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.442 23:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.703 23:51:07 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.703 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.964 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.225 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.485 23:51:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:01.485 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.485 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.485 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.485 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.485 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.748 
23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.748 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:02.009 /dev/nbd0 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.009 1+0 records in 00:07:02.009 1+0 records out 00:07:02.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493298 s, 8.3 MB/s 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:02.009 /dev/nbd1 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.009 23:51:08 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.009 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.270 1+0 records in 00:07:02.270 1+0 records out 00:07:02.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502929 s, 8.1 MB/s 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:02.270 /dev/nbd10 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.270 1+0 records in 00:07:02.270 1+0 records out 00:07:02.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640466 s, 6.4 MB/s 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.270 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.271 23:51:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:02.531 /dev/nbd11 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.532 1+0 records in 00:07:02.532 1+0 records out 00:07:02.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494962 s, 8.3 MB/s 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.532 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:02.792 /dev/nbd12 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
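Every nbd_start_disk in this run is followed by the same readiness probe being traced here: poll /proc/partitions until the node appears, then perform a single 4 KiB direct-I/O read and check that exactly 4096 bytes landed, which proves the kernel can actually complete I/O against the new device. A simplified sketch of that helper (the real one lives in test/common/autotest_common.sh; the retry bound mirrors the traced loop):

# Sketch: the waitfornbd readiness probe, condensed.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct 4 KiB read proves the kernel can complete I/O on the node.
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
}
waitfornbd nbd12    # the device being probed at this point in the trace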
00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.792 1+0 records in 00:07:02.792 1+0 records out 00:07:02.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337782 s, 12.1 MB/s 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.792 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:03.052 /dev/nbd13 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.052 1+0 records in 00:07:03.052 1+0 records out 00:07:03.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439566 s, 9.3 MB/s 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.052 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:03.313 /dev/nbd14 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.313 1+0 records in 00:07:03.313 1+0 records out 00:07:03.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641801 s, 6.4 MB/s 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.313 23:51:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd0", 00:07:03.574 "bdev_name": "Nvme0n1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd1", 00:07:03.574 "bdev_name": "Nvme1n1p1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd10", 00:07:03.574 "bdev_name": "Nvme1n1p2" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd11", 00:07:03.574 "bdev_name": "Nvme2n1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd12", 00:07:03.574 "bdev_name": "Nvme2n2" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd13", 00:07:03.574 "bdev_name": "Nvme2n3" 
00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd14", 00:07:03.574 "bdev_name": "Nvme3n1" 00:07:03.574 } 00:07:03.574 ]' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd0", 00:07:03.574 "bdev_name": "Nvme0n1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd1", 00:07:03.574 "bdev_name": "Nvme1n1p1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd10", 00:07:03.574 "bdev_name": "Nvme1n1p2" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd11", 00:07:03.574 "bdev_name": "Nvme2n1" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd12", 00:07:03.574 "bdev_name": "Nvme2n2" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd13", 00:07:03.574 "bdev_name": "Nvme2n3" 00:07:03.574 }, 00:07:03.574 { 00:07:03.574 "nbd_device": "/dev/nbd14", 00:07:03.574 "bdev_name": "Nvme3n1" 00:07:03.574 } 00:07:03.574 ]' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.574 /dev/nbd1 00:07:03.574 /dev/nbd10 00:07:03.574 /dev/nbd11 00:07:03.574 /dev/nbd12 00:07:03.574 /dev/nbd13 00:07:03.574 /dev/nbd14' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.574 /dev/nbd1 00:07:03.574 /dev/nbd10 00:07:03.574 /dev/nbd11 00:07:03.574 /dev/nbd12 00:07:03.574 /dev/nbd13 00:07:03.574 /dev/nbd14' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:03.574 256+0 records in 00:07:03.574 256+0 records out 00:07:03.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00729889 s, 144 MB/s 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.574 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.574 256+0 records in 00:07:03.575 256+0 records out 00:07:03.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0754086 s, 13.9 MB/s 00:07:03.575 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.575 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.836 256+0 records in 00:07:03.836 256+0 records out 00:07:03.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.078204 s, 13.4 MB/s 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:03.836 256+0 records in 00:07:03.836 256+0 records out 00:07:03.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0749627 s, 14.0 MB/s 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:03.836 256+0 records in 00:07:03.836 256+0 records out 00:07:03.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0733189 s, 14.3 MB/s 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.836 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:04.097 256+0 records in 00:07:04.097 256+0 records out 00:07:04.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0743252 s, 14.1 MB/s 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:04.097 256+0 records in 00:07:04.097 256+0 records out 00:07:04.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741569 s, 14.1 MB/s 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:04.097 256+0 records in 00:07:04.097 256+0 records out 00:07:04.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0773042 s, 13.6 MB/s 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.097 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.359 23:51:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.618 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.884 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.164 23:51:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.436 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.694 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:05.952 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:06.211 malloc_lvol_verify 00:07:06.211 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:06.469 d08ff51e-ef3f-41eb-af98-fb7a92bf2a84 00:07:06.469 23:51:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:06.469 353f986f-946c-477d-a91c-6d6a599fdccd 00:07:06.469 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:06.728 /dev/nbd0 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:06.728 mke2fs 1.47.0 (5-Feb-2023) 00:07:06.728 Discarding device blocks: 0/4096 done 00:07:06.728 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:06.728 00:07:06.728 Allocating group tables: 0/1 done 00:07:06.728 Writing inode tables: 0/1 done 00:07:06.728 Creating journal (1024 blocks): done 00:07:06.728 Writing superblocks and filesystem accounting information: 0/1 done 00:07:06.728 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:06.728 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61438 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61438 ']' 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61438 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61438 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.987 killing process with pid 61438 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61438' 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61438 00:07:06.987 23:51:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61438 00:07:07.922 23:51:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:07.922 00:07:07.922 real 0m10.550s 00:07:07.922 user 0m15.165s 00:07:07.922 sys 0m3.502s 00:07:07.922 23:51:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.922 23:51:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.922 ************************************ 00:07:07.922 END TEST bdev_nbd 00:07:07.922 ************************************ 00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:07.922 skipping fio tests on NVMe due to multi-ns failures. 00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.922 23:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.922 23:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:07.922 23:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.922 23:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.922 ************************************ 00:07:07.922 START TEST bdev_verify 00:07:07.922 ************************************ 00:07:07.922 23:51:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.922 [2024-11-18 23:51:14.352695] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:07.922 [2024-11-18 23:51:14.352824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ] 00:07:07.922 [2024-11-18 23:51:14.514391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.180 [2024-11-18 23:51:14.624498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.180 [2024-11-18 23:51:14.624617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.754 Running I/O for 5 seconds... 
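
The bdev_verify stage starting here is a thin wrapper around the bdevperf example app: every bdev from bdev.json is driven in verify mode at queue depth 128 with 4 KiB I/O for 5 seconds on two cores (-m 0x3); the remaining flags are passed through exactly as the harness supplies them. The invocation, reduced to its essentials (paths as used in this run):

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
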
00:07:11.062 23296.00 IOPS, 91.00 MiB/s [2024-11-18T23:51:18.690Z] 24032.00 IOPS, 93.88 MiB/s [2024-11-18T23:51:19.629Z] 23360.00 IOPS, 91.25 MiB/s [2024-11-18T23:51:20.562Z] 23072.00 IOPS, 90.12 MiB/s [2024-11-18T23:51:20.562Z] 23590.40 IOPS, 92.15 MiB/s 00:07:13.870 Latency(us) 00:07:13.870 [2024-11-18T23:51:20.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.870 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0xbd0bd 00:07:13.870 Nvme0n1 : 5.05 1697.36 6.63 0.00 0.00 75214.72 13913.80 80659.69 00:07:13.870 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:13.870 Nvme0n1 : 5.04 1626.63 6.35 0.00 0.00 78402.22 17039.36 84692.68 00:07:13.870 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0x4ff80 00:07:13.870 Nvme1n1p1 : 5.06 1696.26 6.63 0.00 0.00 75110.71 15930.29 72997.02 00:07:13.870 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:13.870 Nvme1n1p1 : 5.06 1630.73 6.37 0.00 0.00 78003.61 7813.91 75013.51 00:07:13.870 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0x4ff7f 00:07:13.870 Nvme1n1p2 : 5.06 1695.69 6.62 0.00 0.00 74942.02 17140.18 65334.35 00:07:13.870 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:13.870 Nvme1n1p2 : 5.06 1630.23 6.37 0.00 0.00 77869.99 8015.56 66947.54 00:07:13.870 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0x80000 00:07:13.870 Nvme2n1 : 5.06 1695.15 6.62 0.00 0.00 74891.07 16736.89 64931.05 00:07:13.870 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x80000 length 0x80000 00:07:13.870 Nvme2n1 : 5.08 1638.79 6.40 0.00 0.00 77467.04 10233.70 63317.86 00:07:13.870 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0x80000 00:07:13.870 Nvme2n2 : 5.06 1694.68 6.62 0.00 0.00 74777.82 16131.94 66947.54 00:07:13.870 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x80000 length 0x80000 00:07:13.870 Nvme2n2 : 5.08 1638.36 6.40 0.00 0.00 77316.20 10183.29 64527.75 00:07:13.870 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x0 length 0x80000 00:07:13.870 Nvme2n3 : 5.07 1703.45 6.65 0.00 0.00 74293.72 3831.34 68560.74 00:07:13.870 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.870 Verification LBA range: start 0x80000 length 0x80000 00:07:13.871 Nvme2n3 : 5.08 1637.86 6.40 0.00 0.00 77197.78 10233.70 67350.84 00:07:13.871 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.871 Verification LBA range: start 0x0 length 0x20000 00:07:13.871 Nvme3n1 : 5.08 1712.83 6.69 0.00 0.00 73785.02 7057.72 70173.93 00:07:13.871 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.871 Verification LBA range: start 0x20000 length 0x20000 00:07:13.871 Nvme3n1 : 
5.08 1637.39 6.40 0.00 0.00 77079.18 10435.35 71383.83 00:07:13.871 [2024-11-18T23:51:20.563Z] =================================================================================================================== 00:07:13.871 [2024-11-18T23:51:20.563Z] Total : 23335.41 91.15 0.00 0.00 76138.00 3831.34 84692.68 00:07:15.246 00:07:15.246 real 0m7.548s 00:07:15.246 user 0m14.128s 00:07:15.246 sys 0m0.248s 00:07:15.246 23:51:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.246 23:51:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 ************************************ 00:07:15.246 END TEST bdev_verify 00:07:15.246 ************************************ 00:07:15.246 23:51:21 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:15.246 23:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:15.246 23:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.246 23:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 ************************************ 00:07:15.246 START TEST bdev_verify_big_io 00:07:15.246 ************************************ 00:07:15.246 23:51:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:15.246 [2024-11-18 23:51:21.931928] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:15.246 [2024-11-18 23:51:21.932022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61942 ] 00:07:15.506 [2024-11-18 23:51:22.087214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.767 [2024-11-18 23:51:22.199570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.767 [2024-11-18 23:51:22.199582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.339 Running I/O for 5 seconds... 
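
bdev_verify_big_io, starting here, is the same bdevperf verify pass with one change: the I/O size goes from 4 KiB to 64 KiB (-o 65536), which is why the IOPS figures below are an order of magnitude lower while each operation moves far more data. The only flag that differs:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
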
00:07:22.979 1223.00 IOPS, 76.44 MiB/s [2024-11-18T23:51:29.671Z] 3488.50 IOPS, 218.03 MiB/s 00:07:22.979 Latency(us) 00:07:22.979 [2024-11-18T23:51:29.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.979 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0xbd0b 00:07:22.979 Nvme0n1 : 5.97 91.44 5.72 0.00 0.00 1314929.52 15829.46 1793871.56 00:07:22.979 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:22.979 Nvme0n1 : 6.33 63.98 4.00 0.00 0.00 1853689.93 27625.94 2335904.69 00:07:22.979 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x4ff8 00:07:22.979 Nvme1n1p1 : 6.09 101.79 6.36 0.00 0.00 1144262.79 71383.83 1535760.54 00:07:22.979 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:22.979 Nvme1n1p1 : 6.42 67.49 4.22 0.00 0.00 1704112.04 59284.87 2400432.44 00:07:22.979 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x4ff7 00:07:22.979 Nvme1n1p2 : 6.09 104.87 6.55 0.00 0.00 1072568.16 116149.96 1284102.30 00:07:22.979 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:22.979 Nvme1n1p2 : 6.33 72.48 4.53 0.00 0.00 1518823.76 89128.96 1703532.70 00:07:22.979 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x8000 00:07:22.979 Nvme2n1 : 6.20 107.48 6.72 0.00 0.00 1000843.27 94775.14 1135688.47 00:07:22.979 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x8000 length 0x8000 00:07:22.979 Nvme2n1 : 6.50 71.50 4.47 0.00 0.00 1471034.17 80256.39 2555299.05 00:07:22.979 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x8000 00:07:22.979 Nvme2n2 : 6.38 117.06 7.32 0.00 0.00 882184.61 65737.65 1161499.57 00:07:22.979 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x8000 length 0x8000 00:07:22.979 Nvme2n2 : 6.51 76.24 4.77 0.00 0.00 1329687.63 84289.38 2606921.26 00:07:22.979 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x8000 00:07:22.979 Nvme2n3 : 6.45 128.65 8.04 0.00 0.00 772854.45 31457.28 1187310.67 00:07:22.979 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x8000 length 0x8000 00:07:22.979 Nvme2n3 : 6.61 89.36 5.59 0.00 0.00 1089907.99 8418.86 2671449.01 00:07:22.979 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x0 length 0x2000 00:07:22.979 Nvme3n1 : 6.59 154.75 9.67 0.00 0.00 617306.70 491.52 1206669.00 00:07:22.979 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.979 Verification LBA range: start 0x2000 length 0x2000 00:07:22.979 Nvme3n1 : 6.72 159.64 9.98 0.00 0.00 587426.91 680.57 1948738.17 00:07:22.979 [2024-11-18T23:51:29.671Z] 
=================================================================================================================== 00:07:22.979 [2024-11-18T23:51:29.671Z] Total : 1406.74 87.92 0.00 0.00 1058198.75 491.52 2671449.01 00:07:24.881 00:07:24.881 real 0m9.564s 00:07:24.881 user 0m18.173s 00:07:24.881 sys 0m0.265s 00:07:24.881 23:51:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.881 23:51:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:24.881 ************************************ 00:07:24.881 END TEST bdev_verify_big_io 00:07:24.881 ************************************ 00:07:24.881 23:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.881 23:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:24.881 23:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.881 23:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.881 ************************************ 00:07:24.881 START TEST bdev_write_zeroes 00:07:24.881 ************************************ 00:07:24.882 23:51:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.882 [2024-11-18 23:51:31.547539] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:24.882 [2024-11-18 23:51:31.547674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:07:25.140 [2024-11-18 23:51:31.702862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.140 [2024-11-18 23:51:31.802185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.708 Running I/O for 1 seconds... 
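
bdev_write_zeroes swaps the workload for -w write_zeroes and runs for just one second on a single core (the EAL line above shows -c 0x1), exercising each bdev's zero-out path rather than a read/verify loop. Reduced invocation:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
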
00:07:27.083 64448.00 IOPS, 251.75 MiB/s 00:07:27.083 Latency(us) 00:07:27.083 [2024-11-18T23:51:33.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.083 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme0n1 : 1.02 9188.92 35.89 0.00 0.00 13898.63 6654.42 24802.86 00:07:27.083 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme1n1p1 : 1.03 9177.53 35.85 0.00 0.00 13895.11 11443.59 24702.03 00:07:27.083 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme1n1p2 : 1.03 9166.17 35.81 0.00 0.00 13879.60 11443.59 23794.61 00:07:27.083 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme2n1 : 1.03 9155.82 35.76 0.00 0.00 13874.63 11594.83 23592.96 00:07:27.083 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme2n2 : 1.03 9145.44 35.72 0.00 0.00 13835.50 11494.01 22383.06 00:07:27.083 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme2n3 : 1.03 9135.15 35.68 0.00 0.00 13801.15 9326.28 23391.31 00:07:27.083 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.083 Nvme3n1 : 1.03 9062.80 35.40 0.00 0.00 13871.06 10334.52 25004.50 00:07:27.083 [2024-11-18T23:51:33.775Z] =================================================================================================================== 00:07:27.083 [2024-11-18T23:51:33.775Z] Total : 64031.84 250.12 0.00 0.00 13865.09 6654.42 25004.50 00:07:27.652 00:07:27.652 real 0m2.732s 00:07:27.652 user 0m2.392s 00:07:27.652 sys 0m0.224s 00:07:27.652 23:51:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.652 23:51:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 ************************************ 00:07:27.652 END TEST bdev_write_zeroes 00:07:27.652 ************************************ 00:07:27.652 23:51:34 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.652 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:27.652 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.652 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 ************************************ 00:07:27.652 START TEST bdev_json_nonenclosed 00:07:27.652 ************************************ 00:07:27.652 23:51:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.652 [2024-11-18 23:51:34.329392] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
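
bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at nonenclosed.json and must reject it with the 'not enclosed in {}' error visible just below, rather than crash. The fixture's exact contents are not reproduced in this log; a plausible reconstruction, with the path and body as assumptions, is a config whose top level is not a JSON object:

    # hypothetical fixture -- the real file is test/bdev/nonenclosed.json, contents not shown here
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
    # expected: *ERROR*: Invalid JSON configuration: not enclosed in {}. and a non-zero exit
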
00:07:27.652 [2024-11-18 23:51:34.329505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62115 ] 00:07:27.913 [2024-11-18 23:51:34.491508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.174 [2024-11-18 23:51:34.642767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.174 [2024-11-18 23:51:34.642917] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:28.174 [2024-11-18 23:51:34.642940] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:28.174 [2024-11-18 23:51:34.642952] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.435 00:07:28.435 real 0m0.604s 00:07:28.435 user 0m0.392s 00:07:28.435 sys 0m0.107s 00:07:28.435 23:51:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.435 ************************************ 00:07:28.435 END TEST bdev_json_nonenclosed 00:07:28.435 23:51:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 ************************************ 00:07:28.435 23:51:34 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.435 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:28.435 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.435 23:51:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 ************************************ 00:07:28.435 START TEST bdev_json_nonarray 00:07:28.435 ************************************ 00:07:28.435 23:51:34 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.435 [2024-11-18 23:51:35.011630] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:28.435 [2024-11-18 23:51:35.011781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62141 ] 00:07:28.696 [2024-11-18 23:51:35.175743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.697 [2024-11-18 23:51:35.333514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.697 [2024-11-18 23:51:35.333663] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
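
The error just above is the point of bdev_json_nonarray: the config parser must reject a file whose 'subsystems' key holds an object instead of an array, again exiting non-zero instead of crashing. As with the previous fixture, the file body below is a reconstruction, not the actual nonarray.json:

    # hypothetical fixture -- the real file is test/bdev/nonarray.json, contents not shown here
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF
    build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1
    # expected: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
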
00:07:28.697 [2024-11-18 23:51:35.333687] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:28.697 [2024-11-18 23:51:35.333699] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.958 00:07:28.958 real 0m0.614s 00:07:28.958 user 0m0.382s 00:07:28.958 sys 0m0.126s 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.958 ************************************ 00:07:28.958 END TEST bdev_json_nonarray 00:07:28.958 ************************************ 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:28.958 23:51:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:28.958 23:51:35 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:28.958 23:51:35 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:28.958 23:51:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.958 23:51:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.958 23:51:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.958 ************************************ 00:07:28.958 START TEST bdev_gpt_uuid 00:07:28.958 ************************************ 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62166 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62166 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62166 ']' 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.958 23:51:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:29.219 [2024-11-18 23:51:35.726426] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
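
bdev_gpt_uuid, now starting, brings up a bare spdk_tgt, loads the same bdev.json, and then asserts that each GPT partition bdev can be looked up by its partition UUID and that the returned alias and unique_partition_guid round-trip exactly. The core of the check, sketched with the first-partition UUID seen later in this run:

    # query a GPT partition bdev by UUID over the default /var/tmp/spdk.sock
    bdev=$(scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "6f89f330-603b-4116-ac73-2ca8eae53030" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == \
       "6f89f330-603b-4116-ac73-2ca8eae53030" ]]
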
00:07:29.219 [2024-11-18 23:51:35.726574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62166 ] 00:07:29.219 [2024-11-18 23:51:35.891342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.504 [2024-11-18 23:51:35.998650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.148 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.148 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:30.148 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.148 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.148 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.409 Some configs were skipped because the RPC state that can call them passed over. 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:30.409 { 00:07:30.409 "name": "Nvme1n1p1", 00:07:30.409 "aliases": [ 00:07:30.409 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:30.409 ], 00:07:30.409 "product_name": "GPT Disk", 00:07:30.409 "block_size": 4096, 00:07:30.409 "num_blocks": 655104, 00:07:30.409 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:30.409 "assigned_rate_limits": { 00:07:30.409 "rw_ios_per_sec": 0, 00:07:30.409 "rw_mbytes_per_sec": 0, 00:07:30.409 "r_mbytes_per_sec": 0, 00:07:30.409 "w_mbytes_per_sec": 0 00:07:30.409 }, 00:07:30.409 "claimed": false, 00:07:30.409 "zoned": false, 00:07:30.409 "supported_io_types": { 00:07:30.409 "read": true, 00:07:30.409 "write": true, 00:07:30.409 "unmap": true, 00:07:30.409 "flush": true, 00:07:30.409 "reset": true, 00:07:30.409 "nvme_admin": false, 00:07:30.409 "nvme_io": false, 00:07:30.409 "nvme_io_md": false, 00:07:30.409 "write_zeroes": true, 00:07:30.409 "zcopy": false, 00:07:30.409 "get_zone_info": false, 00:07:30.409 "zone_management": false, 00:07:30.409 "zone_append": false, 00:07:30.409 "compare": true, 00:07:30.409 "compare_and_write": false, 00:07:30.409 "abort": true, 00:07:30.409 "seek_hole": false, 00:07:30.409 "seek_data": false, 00:07:30.409 "copy": true, 00:07:30.409 "nvme_iov_md": false 00:07:30.409 }, 00:07:30.409 "driver_specific": { 
00:07:30.409 "gpt": { 00:07:30.409 "base_bdev": "Nvme1n1", 00:07:30.409 "offset_blocks": 256, 00:07:30.409 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:30.409 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:30.409 "partition_name": "SPDK_TEST_first" 00:07:30.409 } 00:07:30.409 } 00:07:30.409 } 00:07:30.409 ]' 00:07:30.409 23:51:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:30.409 { 00:07:30.409 "name": "Nvme1n1p2", 00:07:30.409 "aliases": [ 00:07:30.409 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:30.409 ], 00:07:30.409 "product_name": "GPT Disk", 00:07:30.409 "block_size": 4096, 00:07:30.409 "num_blocks": 655103, 00:07:30.409 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:30.409 "assigned_rate_limits": { 00:07:30.409 "rw_ios_per_sec": 0, 00:07:30.409 "rw_mbytes_per_sec": 0, 00:07:30.409 "r_mbytes_per_sec": 0, 00:07:30.409 "w_mbytes_per_sec": 0 00:07:30.409 }, 00:07:30.409 "claimed": false, 00:07:30.409 "zoned": false, 00:07:30.409 "supported_io_types": { 00:07:30.409 "read": true, 00:07:30.409 "write": true, 00:07:30.409 "unmap": true, 00:07:30.409 "flush": true, 00:07:30.409 "reset": true, 00:07:30.409 "nvme_admin": false, 00:07:30.409 "nvme_io": false, 00:07:30.409 "nvme_io_md": false, 00:07:30.409 "write_zeroes": true, 00:07:30.409 "zcopy": false, 00:07:30.409 "get_zone_info": false, 00:07:30.409 "zone_management": false, 00:07:30.409 "zone_append": false, 00:07:30.409 "compare": true, 00:07:30.409 "compare_and_write": false, 00:07:30.409 "abort": true, 00:07:30.409 "seek_hole": false, 00:07:30.409 "seek_data": false, 00:07:30.409 "copy": true, 00:07:30.409 "nvme_iov_md": false 00:07:30.409 }, 00:07:30.409 "driver_specific": { 00:07:30.409 "gpt": { 00:07:30.409 "base_bdev": "Nvme1n1", 00:07:30.409 "offset_blocks": 655360, 00:07:30.409 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:30.409 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:30.409 "partition_name": "SPDK_TEST_second" 00:07:30.409 } 00:07:30.409 } 00:07:30.409 } 00:07:30.409 ]' 00:07:30.409 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62166 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62166 ']' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62166 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62166 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.671 killing process with pid 62166 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62166' 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62166 00:07:30.671 23:51:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62166 00:07:32.578 00:07:32.578 real 0m3.123s 00:07:32.578 user 0m3.199s 00:07:32.578 sys 0m0.439s 00:07:32.578 23:51:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.578 23:51:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.578 ************************************ 00:07:32.578 END TEST bdev_gpt_uuid 00:07:32.578 ************************************ 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:32.578 23:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:32.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.578 Waiting for block devices as requested 00:07:32.578 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.835 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:32.835 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.835 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.100 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:38.100 23:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:38.100 23:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:38.358 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:38.358 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:38.358 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:38.358 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:38.358 23:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:38.358 00:07:38.358 real 0m57.075s 00:07:38.358 user 1m13.455s 00:07:38.358 sys 0m8.143s 00:07:38.358 23:51:44 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.358 ************************************ 00:07:38.358 END TEST blockdev_nvme_gpt 00:07:38.358 ************************************ 00:07:38.358 23:51:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.358 23:51:44 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:38.358 23:51:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.358 23:51:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.358 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.358 ************************************ 00:07:38.358 START TEST nvme 00:07:38.358 ************************************ 00:07:38.358 23:51:44 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:38.358 * Looking for test storage... 00:07:38.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:38.358 23:51:44 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.358 23:51:44 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.358 23:51:44 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.358 23:51:44 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.358 23:51:44 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.358 23:51:44 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.358 23:51:44 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.358 23:51:44 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.358 23:51:44 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.358 23:51:44 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.358 23:51:44 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.358 23:51:44 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.358 23:51:44 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.358 23:51:44 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.358 23:51:44 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.358 23:51:44 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:38.358 23:51:44 nvme -- scripts/common.sh@345 -- # : 1 00:07:38.358 23:51:44 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.358 23:51:44 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.359 23:51:44 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:38.359 23:51:44 nvme -- scripts/common.sh@353 -- # local d=1 00:07:38.359 23:51:44 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.359 23:51:44 nvme -- scripts/common.sh@355 -- # echo 1 00:07:38.359 23:51:44 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.359 23:51:44 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:38.359 23:51:44 nvme -- scripts/common.sh@353 -- # local d=2 00:07:38.359 23:51:44 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.359 23:51:44 nvme -- scripts/common.sh@355 -- # echo 2 00:07:38.359 23:51:44 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.359 23:51:44 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.359 23:51:44 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.359 23:51:44 nvme -- scripts/common.sh@368 -- # return 0 00:07:38.359 23:51:44 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.359 23:51:44 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.359 --rc genhtml_branch_coverage=1 00:07:38.359 --rc genhtml_function_coverage=1 00:07:38.359 --rc genhtml_legend=1 00:07:38.359 --rc geninfo_all_blocks=1 00:07:38.359 --rc geninfo_unexecuted_blocks=1 00:07:38.359 00:07:38.359 ' 00:07:38.359 23:51:44 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.359 --rc genhtml_branch_coverage=1 00:07:38.359 --rc genhtml_function_coverage=1 00:07:38.359 --rc genhtml_legend=1 00:07:38.359 --rc geninfo_all_blocks=1 00:07:38.359 --rc geninfo_unexecuted_blocks=1 00:07:38.359 00:07:38.359 ' 00:07:38.359 23:51:44 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.359 --rc genhtml_branch_coverage=1 00:07:38.359 --rc genhtml_function_coverage=1 00:07:38.359 --rc genhtml_legend=1 00:07:38.359 --rc geninfo_all_blocks=1 00:07:38.359 --rc geninfo_unexecuted_blocks=1 00:07:38.359 00:07:38.359 ' 00:07:38.359 23:51:44 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.359 --rc genhtml_branch_coverage=1 00:07:38.359 --rc genhtml_function_coverage=1 00:07:38.359 --rc genhtml_legend=1 00:07:38.359 --rc geninfo_all_blocks=1 00:07:38.359 --rc geninfo_unexecuted_blocks=1 00:07:38.359 00:07:38.359 ' 00:07:38.359 23:51:44 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:38.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.184 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.184 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.184 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.442 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.442 23:51:45 nvme -- nvme/nvme.sh@79 -- # uname 00:07:39.442 23:51:45 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:39.442 23:51:45 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:39.442 23:51:45 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:39.442 23:51:45 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1075 -- # stubpid=62801 00:07:39.442 Waiting for stub to ready for secondary processes... 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62801 ]] 00:07:39.442 23:51:45 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:39.442 [2024-11-18 23:51:46.005314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:39.442 [2024-11-18 23:51:46.005434] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:40.383 [2024-11-18 23:51:46.772518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.383 [2024-11-18 23:51:46.870869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.383 [2024-11-18 23:51:46.871206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.383 [2024-11-18 23:51:46.871221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.383 [2024-11-18 23:51:46.884901] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:40.383 [2024-11-18 23:51:46.885041] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:40.383 [2024-11-18 23:51:46.896693] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:40.383 [2024-11-18 23:51:46.896887] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:40.383 [2024-11-18 23:51:46.899104] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:40.383 [2024-11-18 23:51:46.899394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:40.383 [2024-11-18 23:51:46.899594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:40.383 [2024-11-18 23:51:46.901615] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:40.383 [2024-11-18 23:51:46.901789] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:40.383 [2024-11-18 23:51:46.901871] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:40.383 [2024-11-18 23:51:46.903934] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:40.383 [2024-11-18 23:51:46.904175] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:40.383 [2024-11-18 23:51:46.904267] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:40.383 [2024-11-18 23:51:46.904329] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:40.383 [2024-11-18 23:51:46.904427] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:40.383 23:51:46 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:40.383 23:51:46 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:40.383 done. 00:07:40.383 23:51:46 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:40.383 23:51:46 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:40.383 23:51:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.383 23:51:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.383 ************************************ 00:07:40.383 START TEST nvme_reset 00:07:40.383 ************************************ 00:07:40.383 23:51:46 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:40.641 Initializing NVMe Controllers 00:07:40.641 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:40.641 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:40.641 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:40.641 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:40.641 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:40.641 00:07:40.641 real 0m0.219s 00:07:40.641 user 0m0.071s 00:07:40.641 sys 0m0.100s 00:07:40.641 23:51:47 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.641 23:51:47 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:40.641 ************************************ 00:07:40.641 END TEST nvme_reset 00:07:40.641 ************************************ 00:07:40.641 23:51:47 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:40.641 23:51:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.641 23:51:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.642 23:51:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.642 ************************************ 00:07:40.642 START TEST nvme_identify 00:07:40.642 ************************************ 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:40.642 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:40.642 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:40.642 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:40.642 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:40.642 23:51:47 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:40.642 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:40.902 
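The controller dump below is driven by the BDF list assembled in the get_nvme_bdfs trace just above. A minimal sketch of that pattern, assuming an SPDK checkout at the rootdir path this log uses (the early-exit guard and its error message are illustrative, not the verbatim autotest helper):

    #!/usr/bin/env bash
    # Sketch of the get_nvme_bdfs pattern traced above: gen_nvme.sh emits a
    # JSON bdev config, and jq extracts each controller's PCI address
    # (params.traddr). rootdir mirrors this log's checkout location.
    rootdir=/home/vagrant/spdk_repo/spdk

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # Mirrors the (( 4 == 0 )) guard in the trace: fail fast when no
    # controller was enumerated instead of running identify on nothing.
    ((${#bdfs[@]} == 0)) && { echo 'no NVMe BDFs found' >&2; exit 1; }

    printf '%s\n' "${bdfs[@]}"

On this VM the list comes back as 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0, exactly what the traced printf echoes before spdk_nvme_identify -i 0 produces the dumps that follow.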
===================================================== 00:07:40.902 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.902 ===================================================== 00:07:40.902 Controller Capabilities/Features 00:07:40.902 ================================ 00:07:40.902 Vendor ID: 1b36 00:07:40.902 Subsystem Vendor ID: 1af4 00:07:40.902 Serial Number: 12340 00:07:40.902 Model Number: QEMU NVMe Ctrl 00:07:40.902 Firmware Version: 8.0.0 00:07:40.902 Recommended Arb Burst: 6 00:07:40.902 IEEE OUI Identifier: 00 54 52 00:07:40.902 Multi-path I/O 00:07:40.902 May have multiple subsystem ports: No 00:07:40.902 May have multiple controllers: No 00:07:40.902 Associated with SR-IOV VF: No 00:07:40.902 Max Data Transfer Size: 524288 00:07:40.902 Max Number of Namespaces: 256 00:07:40.902 Max Number of I/O Queues: 64 00:07:40.902 NVMe Specification Version (VS): 1.4 00:07:40.902 NVMe Specification Version (Identify): 1.4 00:07:40.902 Maximum Queue Entries: 2048 00:07:40.902 Contiguous Queues Required: Yes 00:07:40.902 Arbitration Mechanisms Supported 00:07:40.902 Weighted Round Robin: Not Supported 00:07:40.902 Vendor Specific: Not Supported 00:07:40.902 Reset Timeout: 7500 ms 00:07:40.902 Doorbell Stride: 4 bytes 00:07:40.902 NVM Subsystem Reset: Not Supported 00:07:40.902 Command Sets Supported 00:07:40.902 NVM Command Set: Supported 00:07:40.902 Boot Partition: Not Supported 00:07:40.902 Memory Page Size Minimum: 4096 bytes 00:07:40.903 Memory Page Size Maximum: 65536 bytes 00:07:40.903 Persistent Memory Region: Not Supported 00:07:40.903 Optional Asynchronous Events Supported 00:07:40.903 Namespace Attribute Notices: Supported 00:07:40.903 Firmware Activation Notices: Not Supported 00:07:40.903 ANA Change Notices: Not Supported 00:07:40.903 PLE Aggregate Log Change Notices: Not Supported 00:07:40.903 LBA Status Info Alert Notices: Not Supported 00:07:40.903 EGE Aggregate Log Change Notices: Not Supported 00:07:40.903 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.903 Zone Descriptor Change Notices: Not Supported 00:07:40.903 Discovery Log Change Notices: Not Supported 00:07:40.903 Controller Attributes 00:07:40.903 128-bit Host Identifier: Not Supported 00:07:40.903 Non-Operational Permissive Mode: Not Supported 00:07:40.903 NVM Sets: Not Supported 00:07:40.903 Read Recovery Levels: Not Supported 00:07:40.903 Endurance Groups: Not Supported 00:07:40.903 Predictable Latency Mode: Not Supported 00:07:40.903 Traffic Based Keep ALive: Not Supported 00:07:40.903 Namespace Granularity: Not Supported 00:07:40.903 SQ Associations: Not Supported 00:07:40.903 UUID List: Not Supported 00:07:40.903 Multi-Domain Subsystem: Not Supported 00:07:40.903 Fixed Capacity Management: Not Supported 00:07:40.903 Variable Capacity Management: Not Supported 00:07:40.903 Delete Endurance Group: Not Supported 00:07:40.903 Delete NVM Set: Not Supported 00:07:40.903 Extended LBA Formats Supported: Supported 00:07:40.903 Flexible Data Placement Supported: Not Supported 00:07:40.903 00:07:40.903 Controller Memory Buffer Support 00:07:40.903 ================================ 00:07:40.903 Supported: No 00:07:40.903 00:07:40.903 Persistent Memory Region Support 00:07:40.903 ================================ 00:07:40.903 Supported: No 00:07:40.903 00:07:40.903 Admin Command Set Attributes 00:07:40.903 ============================ 00:07:40.903 Security Send/Receive: Not Supported 00:07:40.903 Format NVM: Supported 00:07:40.903 Firmware Activate/Download: Not Supported 00:07:40.903 Namespace Management: 
Supported 00:07:40.903 Device Self-Test: Not Supported 00:07:40.903 Directives: Supported 00:07:40.903 NVMe-MI: Not Supported 00:07:40.903 Virtualization Management: Not Supported 00:07:40.903 Doorbell Buffer Config: Supported 00:07:40.903 Get LBA Status Capability: Not Supported 00:07:40.903 Command & Feature Lockdown Capability: Not Supported 00:07:40.903 Abort Command Limit: 4 00:07:40.903 Async Event Request Limit: 4 00:07:40.903 Number of Firmware Slots: N/A 00:07:40.903 Firmware Slot 1 Read-Only: N/A 00:07:40.903 Firmware Activation Without Reset: N/A 00:07:40.903 Multiple Update Detection Support: N/A 00:07:40.903 Firmware Update Granularity: No Information Provided 00:07:40.903 Per-Namespace SMART Log: Yes 00:07:40.903 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.903 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:40.903 Command Effects Log Page: Supported 00:07:40.903 Get Log Page Extended Data: Supported 00:07:40.903 Telemetry Log Pages: Not Supported 00:07:40.903 Persistent Event Log Pages: Not Supported 00:07:40.903 Supported Log Pages Log Page: May Support 00:07:40.903 Commands Supported & Effects Log Page: Not Supported 00:07:40.903 Feature Identifiers & Effects Log Page:May Support 00:07:40.903 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.903 Data Area 4 for Telemetry Log: Not Supported 00:07:40.903 Error Log Page Entries Supported: 1 00:07:40.903 Keep Alive: Not Supported 00:07:40.903 00:07:40.903 NVM Command Set Attributes 00:07:40.903 ========================== 00:07:40.903 Submission Queue Entry Size 00:07:40.903 Max: 64 00:07:40.903 Min: 64 00:07:40.903 Completion Queue Entry Size 00:07:40.903 Max: 16 00:07:40.903 Min: 16 00:07:40.903 Number of Namespaces: 256 00:07:40.903 Compare Command: Supported 00:07:40.903 Write Uncorrectable Command: Not Supported 00:07:40.903 Dataset Management Command: Supported 00:07:40.903 Write Zeroes Command: Supported 00:07:40.903 Set Features Save Field: Supported 00:07:40.903 Reservations: Not Supported 00:07:40.903 Timestamp: Supported 00:07:40.903 Copy: Supported 00:07:40.903 Volatile Write Cache: Present 00:07:40.903 Atomic Write Unit (Normal): 1 00:07:40.903 Atomic Write Unit (PFail): 1 00:07:40.903 Atomic Compare & Write Unit: 1 00:07:40.903 Fused Compare & Write: Not Supported 00:07:40.903 Scatter-Gather List 00:07:40.903 SGL Command Set: Supported 00:07:40.903 SGL Keyed: Not Supported 00:07:40.903 SGL Bit Bucket Descriptor: Not Supported 00:07:40.903 SGL Metadata Pointer: Not Supported 00:07:40.903 Oversized SGL: Not Supported 00:07:40.903 SGL Metadata Address: Not Supported 00:07:40.903 SGL Offset: Not Supported 00:07:40.903 Transport SGL Data Block: Not Supported 00:07:40.903 Replay Protected Memory Block: Not Supported 00:07:40.903 00:07:40.903 Firmware Slot Information 00:07:40.903 ========================= 00:07:40.903 Active slot: 1 00:07:40.903 Slot 1 Firmware Revision: 1.0 00:07:40.903 00:07:40.903 00:07:40.903 Commands Supported and Effects 00:07:40.903 ============================== 00:07:40.903 Admin Commands 00:07:40.903 -------------- 00:07:40.903 Delete I/O Submission Queue (00h): Supported 00:07:40.903 Create I/O Submission Queue (01h): Supported 00:07:40.903 Get Log Page (02h): Supported 00:07:40.903 Delete I/O Completion Queue (04h): Supported 00:07:40.903 Create I/O Completion Queue (05h): Supported 00:07:40.903 Identify (06h): Supported 00:07:40.903 Abort (08h): Supported 00:07:40.903 Set Features (09h): Supported 00:07:40.903 Get Features (0Ah): Supported 00:07:40.903 Asynchronous 
Event Request (0Ch): Supported 00:07:40.903 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.903 Directive Send (19h): Supported 00:07:40.903 Directive Receive (1Ah): Supported 00:07:40.903 Virtualization Management (1Ch): Supported 00:07:40.903 Doorbell Buffer Config (7Ch): Supported 00:07:40.903 Format NVM (80h): Supported LBA-Change 00:07:40.903 I/O Commands 00:07:40.903 ------------ 00:07:40.903 Flush (00h): Supported LBA-Change 00:07:40.903 Write (01h): Supported LBA-Change 00:07:40.903 Read (02h): Supported 00:07:40.903 Compare (05h): Supported 00:07:40.903 Write Zeroes (08h): Supported LBA-Change 00:07:40.903 Dataset Management (09h): Supported LBA-Change 00:07:40.903 Unknown (0Ch): Supported 00:07:40.903 Unknown (12h): Supported 00:07:40.903 Copy (19h): Supported LBA-Change 00:07:40.903 Unknown (1Dh): Supported LBA-Change 00:07:40.903 00:07:40.903 Error Log 00:07:40.903 ========= 00:07:40.903 00:07:40.903 Arbitration 00:07:40.903 =========== 00:07:40.903 Arbitration Burst: no limit 00:07:40.903 00:07:40.903 Power Management 00:07:40.903 ================ 00:07:40.903 Number of Power States: 1 00:07:40.903 Current Power State: Power State #0 00:07:40.903 Power State #0: 00:07:40.903 Max Power: 25.00 W 00:07:40.903 Non-Operational State: Operational 00:07:40.903 Entry Latency: 16 microseconds 00:07:40.903 Exit Latency: 4 microseconds 00:07:40.903 Relative Read Throughput: 0 00:07:40.903 Relative Read Latency: 0 00:07:40.903 Relative Write Throughput: 0 00:07:40.903 Relative Write Latency: 0 00:07:40.903 [2024-11-18 23:51:47.488755] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62822 terminated unexpected 00:07:40.903 [2024-11-18 23:51:47.489637] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62822 terminated unexpected 00:07:40.903 Idle Power: Not Reported 00:07:40.903 Active Power: Not Reported 00:07:40.903 Non-Operational Permissive Mode: Not Supported 00:07:40.903 00:07:40.903 Health Information 00:07:40.903 ================== 00:07:40.903 Critical Warnings: 00:07:40.903 Available Spare Space: OK 00:07:40.903 Temperature: OK 00:07:40.903 Device Reliability: OK 00:07:40.903 Read Only: No 00:07:40.903 Volatile Memory Backup: OK 00:07:40.903 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.903 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.903 Available Spare: 0% 00:07:40.903 Available Spare Threshold: 0% 00:07:40.903 Life Percentage Used: 0% 00:07:40.903 Data Units Read: 647 00:07:40.903 Data Units Written: 576 00:07:40.903 Host Read Commands: 39037 00:07:40.903 Host Write Commands: 38823 00:07:40.903 Controller Busy Time: 0 minutes 00:07:40.903 Power Cycles: 0 00:07:40.903 Power On Hours: 0 hours 00:07:40.903 Unsafe Shutdowns: 0 00:07:40.903 Unrecoverable Media Errors: 0 00:07:40.903 Lifetime Error Log Entries: 0 00:07:40.903 Warning Temperature Time: 0 minutes 00:07:40.903 Critical Temperature Time: 0 minutes 00:07:40.903 00:07:40.904 Number of Queues 00:07:40.904 ================ 00:07:40.904 Number of I/O Submission Queues: 64 00:07:40.904 Number of I/O Completion Queues: 64 00:07:40.904 00:07:40.904 ZNS Specific Controller Data 00:07:40.904 ============================ 00:07:40.904 Zone Append Size Limit: 0 00:07:40.904 00:07:40.904 00:07:40.904 Active Namespaces 00:07:40.904 ================= 00:07:40.904 Namespace ID:1 00:07:40.904 Error Recovery Timeout: Unlimited 00:07:40.904 Command Set Identifier: NVM (00h) 00:07:40.904 Deallocate: Supported 00:07:40.904
Deallocated/Unwritten Error: Supported 00:07:40.904 Deallocated Read Value: All 0x00 00:07:40.904 Deallocate in Write Zeroes: Not Supported 00:07:40.904 Deallocated Guard Field: 0xFFFF 00:07:40.904 Flush: Supported 00:07:40.904 Reservation: Not Supported 00:07:40.904 Metadata Transferred as: Separate Metadata Buffer 00:07:40.904 Namespace Sharing Capabilities: Private 00:07:40.904 Size (in LBAs): 1548666 (5GiB) 00:07:40.904 Capacity (in LBAs): 1548666 (5GiB) 00:07:40.904 Utilization (in LBAs): 1548666 (5GiB) 00:07:40.904 Thin Provisioning: Not Supported 00:07:40.904 Per-NS Atomic Units: No 00:07:40.904 Maximum Single Source Range Length: 128 00:07:40.904 Maximum Copy Length: 128 00:07:40.904 Maximum Source Range Count: 128 00:07:40.904 NGUID/EUI64 Never Reused: No 00:07:40.904 Namespace Write Protected: No 00:07:40.904 Number of LBA Formats: 8 00:07:40.904 Current LBA Format: LBA Format #07 00:07:40.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.904 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.904 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.904 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.904 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.904 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.904 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.904 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.904 00:07:40.904 NVM Specific Namespace Data 00:07:40.904 =========================== 00:07:40.904 Logical Block Storage Tag Mask: 0 00:07:40.904 Protection Information Capabilities: 00:07:40.904 16b Guard Protection Information Storage Tag Support: No 00:07:40.904 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.904 Storage Tag Check Read Support: No 00:07:40.904 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.904 ===================================================== 00:07:40.904 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.904 ===================================================== 00:07:40.904 Controller Capabilities/Features 00:07:40.904 ================================ 00:07:40.904 Vendor ID: 1b36 00:07:40.904 Subsystem Vendor ID: 1af4 00:07:40.904 Serial Number: 12341 00:07:40.904 Model Number: QEMU NVMe Ctrl 00:07:40.904 Firmware Version: 8.0.0 00:07:40.904 Recommended Arb Burst: 6 00:07:40.904 IEEE OUI Identifier: 00 54 52 00:07:40.904 Multi-path I/O 00:07:40.904 May have multiple subsystem ports: No 00:07:40.904 May have multiple controllers: No 00:07:40.904 Associated with SR-IOV VF: No 00:07:40.904 Max Data Transfer Size: 524288 00:07:40.904 Max Number of Namespaces: 256 00:07:40.904 Max Number of I/O Queues: 64 00:07:40.904 NVMe Specification Version (VS): 1.4 00:07:40.904 NVMe 
Specification Version (Identify): 1.4 00:07:40.904 Maximum Queue Entries: 2048 00:07:40.904 Contiguous Queues Required: Yes 00:07:40.904 Arbitration Mechanisms Supported 00:07:40.904 Weighted Round Robin: Not Supported 00:07:40.904 Vendor Specific: Not Supported 00:07:40.904 Reset Timeout: 7500 ms 00:07:40.904 Doorbell Stride: 4 bytes 00:07:40.904 NVM Subsystem Reset: Not Supported 00:07:40.904 Command Sets Supported 00:07:40.904 NVM Command Set: Supported 00:07:40.904 Boot Partition: Not Supported 00:07:40.904 Memory Page Size Minimum: 4096 bytes 00:07:40.904 Memory Page Size Maximum: 65536 bytes 00:07:40.904 Persistent Memory Region: Not Supported 00:07:40.904 Optional Asynchronous Events Supported 00:07:40.904 Namespace Attribute Notices: Supported 00:07:40.904 Firmware Activation Notices: Not Supported 00:07:40.904 ANA Change Notices: Not Supported 00:07:40.904 PLE Aggregate Log Change Notices: Not Supported 00:07:40.904 LBA Status Info Alert Notices: Not Supported 00:07:40.904 EGE Aggregate Log Change Notices: Not Supported 00:07:40.904 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.904 Zone Descriptor Change Notices: Not Supported 00:07:40.904 Discovery Log Change Notices: Not Supported 00:07:40.904 Controller Attributes 00:07:40.904 128-bit Host Identifier: Not Supported 00:07:40.904 Non-Operational Permissive Mode: Not Supported 00:07:40.904 NVM Sets: Not Supported 00:07:40.904 Read Recovery Levels: Not Supported 00:07:40.904 Endurance Groups: Not Supported 00:07:40.904 Predictable Latency Mode: Not Supported 00:07:40.904 Traffic Based Keep ALive: Not Supported 00:07:40.904 Namespace Granularity: Not Supported 00:07:40.904 SQ Associations: Not Supported 00:07:40.904 UUID List: Not Supported 00:07:40.904 Multi-Domain Subsystem: Not Supported 00:07:40.904 Fixed Capacity Management: Not Supported 00:07:40.904 Variable Capacity Management: Not Supported 00:07:40.904 Delete Endurance Group: Not Supported 00:07:40.904 Delete NVM Set: Not Supported 00:07:40.904 Extended LBA Formats Supported: Supported 00:07:40.904 Flexible Data Placement Supported: Not Supported 00:07:40.904 00:07:40.904 Controller Memory Buffer Support 00:07:40.904 ================================ 00:07:40.904 Supported: No 00:07:40.904 00:07:40.904 Persistent Memory Region Support 00:07:40.904 ================================ 00:07:40.904 Supported: No 00:07:40.904 00:07:40.904 Admin Command Set Attributes 00:07:40.904 ============================ 00:07:40.904 Security Send/Receive: Not Supported 00:07:40.904 Format NVM: Supported 00:07:40.904 Firmware Activate/Download: Not Supported 00:07:40.904 Namespace Management: Supported 00:07:40.904 Device Self-Test: Not Supported 00:07:40.904 Directives: Supported 00:07:40.904 NVMe-MI: Not Supported 00:07:40.904 Virtualization Management: Not Supported 00:07:40.904 Doorbell Buffer Config: Supported 00:07:40.904 Get LBA Status Capability: Not Supported 00:07:40.904 Command & Feature Lockdown Capability: Not Supported 00:07:40.904 Abort Command Limit: 4 00:07:40.904 Async Event Request Limit: 4 00:07:40.904 Number of Firmware Slots: N/A 00:07:40.904 Firmware Slot 1 Read-Only: N/A 00:07:40.904 Firmware Activation Without Reset: N/A 00:07:40.904 Multiple Update Detection Support: N/A 00:07:40.904 Firmware Update Granularity: No Information Provided 00:07:40.904 Per-Namespace SMART Log: Yes 00:07:40.904 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.904 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:40.904 Command Effects Log Page: Supported 
00:07:40.904 Get Log Page Extended Data: Supported 00:07:40.904 Telemetry Log Pages: Not Supported 00:07:40.904 Persistent Event Log Pages: Not Supported 00:07:40.904 Supported Log Pages Log Page: May Support 00:07:40.904 Commands Supported & Effects Log Page: Not Supported 00:07:40.904 Feature Identifiers & Effects Log Page:May Support 00:07:40.904 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.904 Data Area 4 for Telemetry Log: Not Supported 00:07:40.904 Error Log Page Entries Supported: 1 00:07:40.904 Keep Alive: Not Supported 00:07:40.904 00:07:40.904 NVM Command Set Attributes 00:07:40.904 ========================== 00:07:40.904 Submission Queue Entry Size 00:07:40.904 Max: 64 00:07:40.904 Min: 64 00:07:40.904 Completion Queue Entry Size 00:07:40.904 Max: 16 00:07:40.904 Min: 16 00:07:40.904 Number of Namespaces: 256 00:07:40.904 Compare Command: Supported 00:07:40.904 Write Uncorrectable Command: Not Supported 00:07:40.904 Dataset Management Command: Supported 00:07:40.904 Write Zeroes Command: Supported 00:07:40.904 Set Features Save Field: Supported 00:07:40.904 Reservations: Not Supported 00:07:40.904 Timestamp: Supported 00:07:40.904 Copy: Supported 00:07:40.904 Volatile Write Cache: Present 00:07:40.905 Atomic Write Unit (Normal): 1 00:07:40.905 Atomic Write Unit (PFail): 1 00:07:40.905 Atomic Compare & Write Unit: 1 00:07:40.905 Fused Compare & Write: Not Supported 00:07:40.905 Scatter-Gather List 00:07:40.905 SGL Command Set: Supported 00:07:40.905 SGL Keyed: Not Supported 00:07:40.905 SGL Bit Bucket Descriptor: Not Supported 00:07:40.905 SGL Metadata Pointer: Not Supported 00:07:40.905 Oversized SGL: Not Supported 00:07:40.905 SGL Metadata Address: Not Supported 00:07:40.905 SGL Offset: Not Supported 00:07:40.905 Transport SGL Data Block: Not Supported 00:07:40.905 Replay Protected Memory Block: Not Supported 00:07:40.905 00:07:40.905 Firmware Slot Information 00:07:40.905 ========================= 00:07:40.905 Active slot: 1 00:07:40.905 Slot 1 Firmware Revision: 1.0 00:07:40.905 00:07:40.905 00:07:40.905 Commands Supported and Effects 00:07:40.905 ============================== 00:07:40.905 Admin Commands 00:07:40.905 -------------- 00:07:40.905 Delete I/O Submission Queue (00h): Supported 00:07:40.905 Create I/O Submission Queue (01h): Supported 00:07:40.905 Get Log Page (02h): Supported 00:07:40.905 Delete I/O Completion Queue (04h): Supported 00:07:40.905 Create I/O Completion Queue (05h): Supported 00:07:40.905 Identify (06h): Supported 00:07:40.905 Abort (08h): Supported 00:07:40.905 Set Features (09h): Supported 00:07:40.905 Get Features (0Ah): Supported 00:07:40.905 Asynchronous Event Request (0Ch): Supported 00:07:40.905 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.905 Directive Send (19h): Supported 00:07:40.905 Directive Receive (1Ah): Supported 00:07:40.905 Virtualization Management (1Ch): Supported 00:07:40.905 Doorbell Buffer Config (7Ch): Supported 00:07:40.905 Format NVM (80h): Supported LBA-Change 00:07:40.905 I/O Commands 00:07:40.905 ------------ 00:07:40.905 Flush (00h): Supported LBA-Change 00:07:40.905 Write (01h): Supported LBA-Change 00:07:40.905 Read (02h): Supported 00:07:40.905 Compare (05h): Supported 00:07:40.905 Write Zeroes (08h): Supported LBA-Change 00:07:40.905 Dataset Management (09h): Supported LBA-Change 00:07:40.905 Unknown (0Ch): Supported 00:07:40.905 Unknown (12h): Supported 00:07:40.905 Copy (19h): Supported LBA-Change 00:07:40.905 Unknown (1Dh): Supported LBA-Change 00:07:40.905 00:07:40.905 Error 
Log 00:07:40.905 ========= 00:07:40.905 00:07:40.905 Arbitration 00:07:40.905 =========== 00:07:40.905 Arbitration Burst: no limit 00:07:40.905 00:07:40.905 Power Management 00:07:40.905 ================ 00:07:40.905 Number of Power States: 1 00:07:40.905 Current Power State: Power State #0 00:07:40.905 Power State #0: 00:07:40.905 Max Power: 25.00 W 00:07:40.905 Non-Operational State: Operational 00:07:40.905 Entry Latency: 16 microseconds 00:07:40.905 Exit Latency: 4 microseconds 00:07:40.905 Relative Read Throughput: 0 00:07:40.905 Relative Read Latency: 0 00:07:40.905 Relative Write Throughput: 0 00:07:40.905 Relative Write Latency: 0 00:07:40.905 Idle Power: Not Reported 00:07:40.905 Active Power: Not Reported 00:07:40.905 Non-Operational Permissive Mode: Not Supported 00:07:40.905 00:07:40.905 Health Information 00:07:40.905 ================== 00:07:40.905 Critical Warnings: 00:07:40.905 Available Spare Space: OK 00:07:40.905 [2024-11-18 23:51:47.490951] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62822 terminated unexpected 00:07:40.905 Temperature: OK 00:07:40.905 Device Reliability: OK 00:07:40.905 Read Only: No 00:07:40.905 Volatile Memory Backup: OK 00:07:40.905 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.905 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.905 Available Spare: 0% 00:07:40.905 Available Spare Threshold: 0% 00:07:40.905 Life Percentage Used: 0% 00:07:40.905 Data Units Read: 1004 00:07:40.905 Data Units Written: 871 00:07:40.905 Host Read Commands: 58203 00:07:40.905 Host Write Commands: 56976 00:07:40.905 Controller Busy Time: 0 minutes 00:07:40.905 Power Cycles: 0 00:07:40.905 Power On Hours: 0 hours 00:07:40.905 Unsafe Shutdowns: 0 00:07:40.905 Unrecoverable Media Errors: 0 00:07:40.905 Lifetime Error Log Entries: 0 00:07:40.905 Warning Temperature Time: 0 minutes 00:07:40.905 Critical Temperature Time: 0 minutes 00:07:40.905 00:07:40.905 Number of Queues 00:07:40.905 ================ 00:07:40.905 Number of I/O Submission Queues: 64 00:07:40.905 Number of I/O Completion Queues: 64 00:07:40.905 00:07:40.905 ZNS Specific Controller Data 00:07:40.905 ============================ 00:07:40.905 Zone Append Size Limit: 0 00:07:40.905 00:07:40.905 00:07:40.905 Active Namespaces 00:07:40.905 ================= 00:07:40.905 Namespace ID:1 00:07:40.905 Error Recovery Timeout: Unlimited 00:07:40.905 Command Set Identifier: NVM (00h) 00:07:40.905 Deallocate: Supported 00:07:40.905 Deallocated/Unwritten Error: Supported 00:07:40.905 Deallocated Read Value: All 0x00 00:07:40.905 Deallocate in Write Zeroes: Not Supported 00:07:40.905 Deallocated Guard Field: 0xFFFF 00:07:40.905 Flush: Supported 00:07:40.905 Reservation: Not Supported 00:07:40.905 Namespace Sharing Capabilities: Private 00:07:40.905 Size (in LBAs): 1310720 (5GiB) 00:07:40.905 Capacity (in LBAs): 1310720 (5GiB) 00:07:40.905 Utilization (in LBAs): 1310720 (5GiB) 00:07:40.905 Thin Provisioning: Not Supported 00:07:40.905 Per-NS Atomic Units: No 00:07:40.905 Maximum Single Source Range Length: 128 00:07:40.905 Maximum Copy Length: 128 00:07:40.905 Maximum Source Range Count: 128 00:07:40.905 NGUID/EUI64 Never Reused: No 00:07:40.905 Namespace Write Protected: No 00:07:40.905 Number of LBA Formats: 8 00:07:40.905 Current LBA Format: LBA Format #04 00:07:40.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.905 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:40.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.905 00:07:40.905 NVM Specific Namespace Data 00:07:40.905 =========================== 00:07:40.905 Logical Block Storage Tag Mask: 0 00:07:40.905 Protection Information Capabilities: 00:07:40.905 16b Guard Protection Information Storage Tag Support: No 00:07:40.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.905 Storage Tag Check Read Support: No 00:07:40.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.905 ===================================================== 00:07:40.905 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.905 ===================================================== 00:07:40.905 Controller Capabilities/Features 00:07:40.905 ================================ 00:07:40.905 Vendor ID: 1b36 00:07:40.905 Subsystem Vendor ID: 1af4 00:07:40.905 Serial Number: 12343 00:07:40.905 Model Number: QEMU NVMe Ctrl 00:07:40.905 Firmware Version: 8.0.0 00:07:40.905 Recommended Arb Burst: 6 00:07:40.905 IEEE OUI Identifier: 00 54 52 00:07:40.905 Multi-path I/O 00:07:40.905 May have multiple subsystem ports: No 00:07:40.905 May have multiple controllers: Yes 00:07:40.905 Associated with SR-IOV VF: No 00:07:40.905 Max Data Transfer Size: 524288 00:07:40.905 Max Number of Namespaces: 256 00:07:40.905 Max Number of I/O Queues: 64 00:07:40.905 NVMe Specification Version (VS): 1.4 00:07:40.905 NVMe Specification Version (Identify): 1.4 00:07:40.905 Maximum Queue Entries: 2048 00:07:40.905 Contiguous Queues Required: Yes 00:07:40.905 Arbitration Mechanisms Supported 00:07:40.905 Weighted Round Robin: Not Supported 00:07:40.905 Vendor Specific: Not Supported 00:07:40.905 Reset Timeout: 7500 ms 00:07:40.905 Doorbell Stride: 4 bytes 00:07:40.905 NVM Subsystem Reset: Not Supported 00:07:40.905 Command Sets Supported 00:07:40.905 NVM Command Set: Supported 00:07:40.906 Boot Partition: Not Supported 00:07:40.906 Memory Page Size Minimum: 4096 bytes 00:07:40.906 Memory Page Size Maximum: 65536 bytes 00:07:40.906 Persistent Memory Region: Not Supported 00:07:40.906 Optional Asynchronous Events Supported 00:07:40.906 Namespace Attribute Notices: Supported 00:07:40.906 Firmware Activation Notices: Not Supported 00:07:40.906 ANA Change Notices: Not Supported 00:07:40.906 PLE Aggregate Log Change Notices: Not Supported 00:07:40.906 LBA Status Info Alert Notices: Not Supported 00:07:40.906 EGE Aggregate Log Change Notices: Not Supported 00:07:40.906 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.906 Zone 
Descriptor Change Notices: Not Supported 00:07:40.906 Discovery Log Change Notices: Not Supported 00:07:40.906 Controller Attributes 00:07:40.906 128-bit Host Identifier: Not Supported 00:07:40.906 Non-Operational Permissive Mode: Not Supported 00:07:40.906 NVM Sets: Not Supported 00:07:40.906 Read Recovery Levels: Not Supported 00:07:40.906 Endurance Groups: Supported 00:07:40.906 Predictable Latency Mode: Not Supported 00:07:40.906 Traffic Based Keep ALive: Not Supported 00:07:40.906 Namespace Granularity: Not Supported 00:07:40.906 SQ Associations: Not Supported 00:07:40.906 UUID List: Not Supported 00:07:40.906 Multi-Domain Subsystem: Not Supported 00:07:40.906 Fixed Capacity Management: Not Supported 00:07:40.906 Variable Capacity Management: Not Supported 00:07:40.906 Delete Endurance Group: Not Supported 00:07:40.906 Delete NVM Set: Not Supported 00:07:40.906 Extended LBA Formats Supported: Supported 00:07:40.906 Flexible Data Placement Supported: Supported 00:07:40.906 00:07:40.906 Controller Memory Buffer Support 00:07:40.906 ================================ 00:07:40.906 Supported: No 00:07:40.906 00:07:40.906 Persistent Memory Region Support 00:07:40.906 ================================ 00:07:40.906 Supported: No 00:07:40.906 00:07:40.906 Admin Command Set Attributes 00:07:40.906 ============================ 00:07:40.906 Security Send/Receive: Not Supported 00:07:40.906 Format NVM: Supported 00:07:40.906 Firmware Activate/Download: Not Supported 00:07:40.906 Namespace Management: Supported 00:07:40.906 Device Self-Test: Not Supported 00:07:40.906 Directives: Supported 00:07:40.906 NVMe-MI: Not Supported 00:07:40.906 Virtualization Management: Not Supported 00:07:40.906 Doorbell Buffer Config: Supported 00:07:40.906 Get LBA Status Capability: Not Supported 00:07:40.906 Command & Feature Lockdown Capability: Not Supported 00:07:40.906 Abort Command Limit: 4 00:07:40.906 Async Event Request Limit: 4 00:07:40.906 Number of Firmware Slots: N/A 00:07:40.906 Firmware Slot 1 Read-Only: N/A 00:07:40.906 Firmware Activation Without Reset: N/A 00:07:40.906 Multiple Update Detection Support: N/A 00:07:40.906 Firmware Update Granularity: No Information Provided 00:07:40.906 Per-Namespace SMART Log: Yes 00:07:40.906 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.906 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:40.906 Command Effects Log Page: Supported 00:07:40.906 Get Log Page Extended Data: Supported 00:07:40.906 Telemetry Log Pages: Not Supported 00:07:40.906 Persistent Event Log Pages: Not Supported 00:07:40.906 Supported Log Pages Log Page: May Support 00:07:40.906 Commands Supported & Effects Log Page: Not Supported 00:07:40.906 Feature Identifiers & Effects Log Page:May Support 00:07:40.906 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.906 Data Area 4 for Telemetry Log: Not Supported 00:07:40.906 Error Log Page Entries Supported: 1 00:07:40.906 Keep Alive: Not Supported 00:07:40.906 00:07:40.906 NVM Command Set Attributes 00:07:40.906 ========================== 00:07:40.906 Submission Queue Entry Size 00:07:40.906 Max: 64 00:07:40.906 Min: 64 00:07:40.906 Completion Queue Entry Size 00:07:40.906 Max: 16 00:07:40.906 Min: 16 00:07:40.906 Number of Namespaces: 256 00:07:40.906 Compare Command: Supported 00:07:40.906 Write Uncorrectable Command: Not Supported 00:07:40.906 Dataset Management Command: Supported 00:07:40.906 Write Zeroes Command: Supported 00:07:40.906 Set Features Save Field: Supported 00:07:40.906 Reservations: Not Supported 00:07:40.906 
Timestamp: Supported 00:07:40.906 Copy: Supported 00:07:40.906 Volatile Write Cache: Present 00:07:40.906 Atomic Write Unit (Normal): 1 00:07:40.906 Atomic Write Unit (PFail): 1 00:07:40.906 Atomic Compare & Write Unit: 1 00:07:40.906 Fused Compare & Write: Not Supported 00:07:40.906 Scatter-Gather List 00:07:40.906 SGL Command Set: Supported 00:07:40.906 SGL Keyed: Not Supported 00:07:40.906 SGL Bit Bucket Descriptor: Not Supported 00:07:40.906 SGL Metadata Pointer: Not Supported 00:07:40.906 Oversized SGL: Not Supported 00:07:40.906 SGL Metadata Address: Not Supported 00:07:40.906 SGL Offset: Not Supported 00:07:40.906 Transport SGL Data Block: Not Supported 00:07:40.906 Replay Protected Memory Block: Not Supported 00:07:40.906 00:07:40.906 Firmware Slot Information 00:07:40.906 ========================= 00:07:40.906 Active slot: 1 00:07:40.906 Slot 1 Firmware Revision: 1.0 00:07:40.906 00:07:40.906 00:07:40.906 Commands Supported and Effects 00:07:40.906 ============================== 00:07:40.906 Admin Commands 00:07:40.906 -------------- 00:07:40.906 Delete I/O Submission Queue (00h): Supported 00:07:40.906 Create I/O Submission Queue (01h): Supported 00:07:40.906 Get Log Page (02h): Supported 00:07:40.906 Delete I/O Completion Queue (04h): Supported 00:07:40.906 Create I/O Completion Queue (05h): Supported 00:07:40.906 Identify (06h): Supported 00:07:40.906 Abort (08h): Supported 00:07:40.906 Set Features (09h): Supported 00:07:40.906 Get Features (0Ah): Supported 00:07:40.906 Asynchronous Event Request (0Ch): Supported 00:07:40.906 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.906 Directive Send (19h): Supported 00:07:40.906 Directive Receive (1Ah): Supported 00:07:40.906 Virtualization Management (1Ch): Supported 00:07:40.906 Doorbell Buffer Config (7Ch): Supported 00:07:40.906 Format NVM (80h): Supported LBA-Change 00:07:40.906 I/O Commands 00:07:40.906 ------------ 00:07:40.906 Flush (00h): Supported LBA-Change 00:07:40.906 Write (01h): Supported LBA-Change 00:07:40.906 Read (02h): Supported 00:07:40.906 Compare (05h): Supported 00:07:40.906 Write Zeroes (08h): Supported LBA-Change 00:07:40.906 Dataset Management (09h): Supported LBA-Change 00:07:40.906 Unknown (0Ch): Supported 00:07:40.906 Unknown (12h): Supported 00:07:40.906 Copy (19h): Supported LBA-Change 00:07:40.906 Unknown (1Dh): Supported LBA-Change 00:07:40.906 00:07:40.906 Error Log 00:07:40.906 ========= 00:07:40.906 00:07:40.906 Arbitration 00:07:40.906 =========== 00:07:40.906 Arbitration Burst: no limit 00:07:40.906 00:07:40.906 Power Management 00:07:40.906 ================ 00:07:40.906 Number of Power States: 1 00:07:40.906 Current Power State: Power State #0 00:07:40.906 Power State #0: 00:07:40.906 Max Power: 25.00 W 00:07:40.906 Non-Operational State: Operational 00:07:40.906 Entry Latency: 16 microseconds 00:07:40.906 Exit Latency: 4 microseconds 00:07:40.906 Relative Read Throughput: 0 00:07:40.906 Relative Read Latency: 0 00:07:40.906 Relative Write Throughput: 0 00:07:40.906 Relative Write Latency: 0 00:07:40.906 Idle Power: Not Reported 00:07:40.906 Active Power: Not Reported 00:07:40.906 Non-Operational Permissive Mode: Not Supported 00:07:40.906 00:07:40.906 Health Information 00:07:40.906 ================== 00:07:40.906 Critical Warnings: 00:07:40.906 Available Spare Space: OK 00:07:40.906 Temperature: OK 00:07:40.906 Device Reliability: OK 00:07:40.906 Read Only: No 00:07:40.906 Volatile Memory Backup: OK 00:07:40.906 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.906 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.906 Available Spare: 0% 00:07:40.906 Available Spare Threshold: 0% 00:07:40.906 Life Percentage Used: 0% 00:07:40.906 Data Units Read: 965 00:07:40.906 Data Units Written: 894 00:07:40.906 Host Read Commands: 41936 00:07:40.906 Host Write Commands: 41359 00:07:40.906 Controller Busy Time: 0 minutes 00:07:40.906 Power Cycles: 0 00:07:40.906 Power On Hours: 0 hours 00:07:40.906 Unsafe Shutdowns: 0 00:07:40.906 Unrecoverable Media Errors: 0 00:07:40.906 Lifetime Error Log Entries: 0 00:07:40.906 Warning Temperature Time: 0 minutes 00:07:40.906 Critical Temperature Time: 0 minutes 00:07:40.906 00:07:40.906 Number of Queues 00:07:40.906 ================ 00:07:40.906 Number of I/O Submission Queues: 64 00:07:40.906 Number of I/O Completion Queues: 64 00:07:40.906 00:07:40.906 ZNS Specific Controller Data 00:07:40.907 ============================ 00:07:40.907 Zone Append Size Limit: 0 00:07:40.907 00:07:40.907 00:07:40.907 Active Namespaces 00:07:40.907 ================= 00:07:40.907 Namespace ID:1 00:07:40.907 Error Recovery Timeout: Unlimited 00:07:40.907 Command Set Identifier: NVM (00h) 00:07:40.907 Deallocate: Supported 00:07:40.907 Deallocated/Unwritten Error: Supported 00:07:40.907 Deallocated Read Value: All 0x00 00:07:40.907 Deallocate in Write Zeroes: Not Supported 00:07:40.907 Deallocated Guard Field: 0xFFFF 00:07:40.907 Flush: Supported 00:07:40.907 Reservation: Not Supported 00:07:40.907 Namespace Sharing Capabilities: Multiple Controllers 00:07:40.907 Size (in LBAs): 262144 (1GiB) 00:07:40.907 Capacity (in LBAs): 262144 (1GiB) 00:07:40.907 Utilization (in LBAs): 262144 (1GiB) 00:07:40.907 Thin Provisioning: Not Supported 00:07:40.907 Per-NS Atomic Units: No 00:07:40.907 Maximum Single Source Range Length: 128 00:07:40.907 Maximum Copy Length: 128 00:07:40.907 Maximum Source Range Count: 128 00:07:40.907 NGUID/EUI64 Never Reused: No 00:07:40.907 Namespace Write Protected: No 00:07:40.907 Endurance group ID: 1 00:07:40.907 Number of LBA Formats: 8 00:07:40.907 Current LBA Format: LBA Format #04 00:07:40.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.907 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.907 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.907 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.907 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.907 00:07:40.907 Get Feature FDP: 00:07:40.907 ================ 00:07:40.907 Enabled: Yes 00:07:40.907 FDP configuration index: 0 00:07:40.907 00:07:40.907 FDP configurations log page 00:07:40.907 =========================== 00:07:40.907 Number of FDP configurations: 1 00:07:40.907 Version: 0 00:07:40.907 Size: 112 00:07:40.907 FDP Configuration Descriptor: 0 00:07:40.907 Descriptor Size: 96 00:07:40.907 Reclaim Group Identifier format: 2 00:07:40.907 FDP Volatile Write Cache: Not Present 00:07:40.907 FDP Configuration: Valid 00:07:40.907 Vendor Specific Size: 0 00:07:40.907 Number of Reclaim Groups: 2 00:07:40.907 Number of Reclaim Unit Handles: 8 00:07:40.907 Max Placement Identifiers: 128 00:07:40.907 Number of Namespaces Supported: 256 00:07:40.907 Reclaim Unit Nominal Size: 6000000 bytes 00:07:40.907 Estimated Reclaim Unit Time Limit: Not Reported 00:07:40.907 RUH Desc #000: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #001: RUH
Type: Initially Isolated 00:07:40.907 RUH Desc #002: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #003: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #004: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #005: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #006: RUH Type: Initially Isolated 00:07:40.907 RUH Desc #007: RUH Type: Initially Isolated 00:07:40.907 00:07:40.907 FDP reclaim unit handle usage log page 00:07:40.907 ====================================== 00:07:40.907 Number of Reclaim Unit Handles: 8 00:07:40.907 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:40.907 RUH Usage Desc #001: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #002: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #003: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #004: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #005: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #006: RUH Attributes: Unused 00:07:40.907 RUH Usage Desc #007: RUH Attributes: Unused 00:07:40.907 00:07:40.907 FDP statistics log page 00:07:40.907 ======================= 00:07:40.907 Host bytes with metadata written: 556572672 00:07:40.907 [2024-11-18 23:51:47.492621] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62822 terminated unexpected 00:07:40.907 Media bytes with metadata written: 556781568 00:07:40.907 Media bytes erased: 0 00:07:40.907 00:07:40.907 FDP events log page 00:07:40.907 =================== 00:07:40.907 Number of FDP events: 0 00:07:40.907 00:07:40.907 NVM Specific Namespace Data 00:07:40.907 =========================== 00:07:40.907 Logical Block Storage Tag Mask: 0 00:07:40.907 Protection Information Capabilities: 00:07:40.907 16b Guard Protection Information Storage Tag Support: No 00:07:40.907 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.907 Storage Tag Check Read Support: No 00:07:40.907 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.907 ===================================================== 00:07:40.907 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.907 ===================================================== 00:07:40.907 Controller Capabilities/Features 00:07:40.907 ================================ 00:07:40.907 Vendor ID: 1b36 00:07:40.907 Subsystem Vendor ID: 1af4 00:07:40.907 Serial Number: 12342 00:07:40.907 Model Number: QEMU NVMe Ctrl 00:07:40.907 Firmware Version: 8.0.0 00:07:40.907 Recommended Arb Burst: 6 00:07:40.907 IEEE OUI Identifier: 00 54 52 00:07:40.907 Multi-path I/O 00:07:40.907 May have multiple subsystem ports: No 00:07:40.907 May have multiple controllers: No 00:07:40.907 Associated with SR-IOV VF: No 00:07:40.907 Max Data Transfer Size: 524288 00:07:40.907 Max Number of Namespaces: 256
00:07:40.907 Max Number of I/O Queues: 64 00:07:40.907 NVMe Specification Version (VS): 1.4 00:07:40.907 NVMe Specification Version (Identify): 1.4 00:07:40.907 Maximum Queue Entries: 2048 00:07:40.907 Contiguous Queues Required: Yes 00:07:40.907 Arbitration Mechanisms Supported 00:07:40.907 Weighted Round Robin: Not Supported 00:07:40.907 Vendor Specific: Not Supported 00:07:40.907 Reset Timeout: 7500 ms 00:07:40.907 Doorbell Stride: 4 bytes 00:07:40.907 NVM Subsystem Reset: Not Supported 00:07:40.907 Command Sets Supported 00:07:40.907 NVM Command Set: Supported 00:07:40.907 Boot Partition: Not Supported 00:07:40.907 Memory Page Size Minimum: 4096 bytes 00:07:40.907 Memory Page Size Maximum: 65536 bytes 00:07:40.907 Persistent Memory Region: Not Supported 00:07:40.907 Optional Asynchronous Events Supported 00:07:40.907 Namespace Attribute Notices: Supported 00:07:40.907 Firmware Activation Notices: Not Supported 00:07:40.907 ANA Change Notices: Not Supported 00:07:40.907 PLE Aggregate Log Change Notices: Not Supported 00:07:40.907 LBA Status Info Alert Notices: Not Supported 00:07:40.907 EGE Aggregate Log Change Notices: Not Supported 00:07:40.907 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.907 Zone Descriptor Change Notices: Not Supported 00:07:40.907 Discovery Log Change Notices: Not Supported 00:07:40.907 Controller Attributes 00:07:40.907 128-bit Host Identifier: Not Supported 00:07:40.907 Non-Operational Permissive Mode: Not Supported 00:07:40.907 NVM Sets: Not Supported 00:07:40.907 Read Recovery Levels: Not Supported 00:07:40.907 Endurance Groups: Not Supported 00:07:40.907 Predictable Latency Mode: Not Supported 00:07:40.907 Traffic Based Keep ALive: Not Supported 00:07:40.907 Namespace Granularity: Not Supported 00:07:40.908 SQ Associations: Not Supported 00:07:40.908 UUID List: Not Supported 00:07:40.908 Multi-Domain Subsystem: Not Supported 00:07:40.908 Fixed Capacity Management: Not Supported 00:07:40.908 Variable Capacity Management: Not Supported 00:07:40.908 Delete Endurance Group: Not Supported 00:07:40.908 Delete NVM Set: Not Supported 00:07:40.908 Extended LBA Formats Supported: Supported 00:07:40.908 Flexible Data Placement Supported: Not Supported 00:07:40.908 00:07:40.908 Controller Memory Buffer Support 00:07:40.908 ================================ 00:07:40.908 Supported: No 00:07:40.908 00:07:40.908 Persistent Memory Region Support 00:07:40.908 ================================ 00:07:40.908 Supported: No 00:07:40.908 00:07:40.908 Admin Command Set Attributes 00:07:40.908 ============================ 00:07:40.908 Security Send/Receive: Not Supported 00:07:40.908 Format NVM: Supported 00:07:40.908 Firmware Activate/Download: Not Supported 00:07:40.908 Namespace Management: Supported 00:07:40.908 Device Self-Test: Not Supported 00:07:40.908 Directives: Supported 00:07:40.908 NVMe-MI: Not Supported 00:07:40.908 Virtualization Management: Not Supported 00:07:40.908 Doorbell Buffer Config: Supported 00:07:40.908 Get LBA Status Capability: Not Supported 00:07:40.908 Command & Feature Lockdown Capability: Not Supported 00:07:40.908 Abort Command Limit: 4 00:07:40.908 Async Event Request Limit: 4 00:07:40.908 Number of Firmware Slots: N/A 00:07:40.908 Firmware Slot 1 Read-Only: N/A 00:07:40.908 Firmware Activation Without Reset: N/A 00:07:40.908 Multiple Update Detection Support: N/A 00:07:40.908 Firmware Update Granularity: No Information Provided 00:07:40.908 Per-Namespace SMART Log: Yes 00:07:40.908 Asymmetric Namespace Access Log Page: Not Supported 
00:07:40.908 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:40.908 Command Effects Log Page: Supported 00:07:40.908 Get Log Page Extended Data: Supported 00:07:40.908 Telemetry Log Pages: Not Supported 00:07:40.908 Persistent Event Log Pages: Not Supported 00:07:40.908 Supported Log Pages Log Page: May Support 00:07:40.908 Commands Supported & Effects Log Page: Not Supported 00:07:40.908 Feature Identifiers & Effects Log Page:May Support 00:07:40.908 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.908 Data Area 4 for Telemetry Log: Not Supported 00:07:40.908 Error Log Page Entries Supported: 1 00:07:40.908 Keep Alive: Not Supported 00:07:40.908 00:07:40.908 NVM Command Set Attributes 00:07:40.908 ========================== 00:07:40.908 Submission Queue Entry Size 00:07:40.908 Max: 64 00:07:40.908 Min: 64 00:07:40.908 Completion Queue Entry Size 00:07:40.908 Max: 16 00:07:40.908 Min: 16 00:07:40.908 Number of Namespaces: 256 00:07:40.908 Compare Command: Supported 00:07:40.908 Write Uncorrectable Command: Not Supported 00:07:40.908 Dataset Management Command: Supported 00:07:40.908 Write Zeroes Command: Supported 00:07:40.908 Set Features Save Field: Supported 00:07:40.908 Reservations: Not Supported 00:07:40.908 Timestamp: Supported 00:07:40.908 Copy: Supported 00:07:40.908 Volatile Write Cache: Present 00:07:40.908 Atomic Write Unit (Normal): 1 00:07:40.908 Atomic Write Unit (PFail): 1 00:07:40.908 Atomic Compare & Write Unit: 1 00:07:40.908 Fused Compare & Write: Not Supported 00:07:40.908 Scatter-Gather List 00:07:40.908 SGL Command Set: Supported 00:07:40.908 SGL Keyed: Not Supported 00:07:40.908 SGL Bit Bucket Descriptor: Not Supported 00:07:40.908 SGL Metadata Pointer: Not Supported 00:07:40.908 Oversized SGL: Not Supported 00:07:40.908 SGL Metadata Address: Not Supported 00:07:40.908 SGL Offset: Not Supported 00:07:40.908 Transport SGL Data Block: Not Supported 00:07:40.908 Replay Protected Memory Block: Not Supported 00:07:40.908 00:07:40.908 Firmware Slot Information 00:07:40.908 ========================= 00:07:40.908 Active slot: 1 00:07:40.908 Slot 1 Firmware Revision: 1.0 00:07:40.908 00:07:40.908 00:07:40.908 Commands Supported and Effects 00:07:40.908 ============================== 00:07:40.908 Admin Commands 00:07:40.908 -------------- 00:07:40.908 Delete I/O Submission Queue (00h): Supported 00:07:40.908 Create I/O Submission Queue (01h): Supported 00:07:40.908 Get Log Page (02h): Supported 00:07:40.908 Delete I/O Completion Queue (04h): Supported 00:07:40.908 Create I/O Completion Queue (05h): Supported 00:07:40.908 Identify (06h): Supported 00:07:40.908 Abort (08h): Supported 00:07:40.908 Set Features (09h): Supported 00:07:40.908 Get Features (0Ah): Supported 00:07:40.908 Asynchronous Event Request (0Ch): Supported 00:07:40.908 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.908 Directive Send (19h): Supported 00:07:40.908 Directive Receive (1Ah): Supported 00:07:40.908 Virtualization Management (1Ch): Supported 00:07:40.908 Doorbell Buffer Config (7Ch): Supported 00:07:40.908 Format NVM (80h): Supported LBA-Change 00:07:40.908 I/O Commands 00:07:40.908 ------------ 00:07:40.908 Flush (00h): Supported LBA-Change 00:07:40.908 Write (01h): Supported LBA-Change 00:07:40.908 Read (02h): Supported 00:07:40.908 Compare (05h): Supported 00:07:40.908 Write Zeroes (08h): Supported LBA-Change 00:07:40.908 Dataset Management (09h): Supported LBA-Change 00:07:40.908 Unknown (0Ch): Supported 00:07:40.908 Unknown (12h): Supported 00:07:40.908 Copy (19h): 
Supported LBA-Change 00:07:40.908 Unknown (1Dh): Supported LBA-Change 00:07:40.908 00:07:40.908 Error Log 00:07:40.908 ========= 00:07:40.908 00:07:40.908 Arbitration 00:07:40.908 =========== 00:07:40.908 Arbitration Burst: no limit 00:07:40.908 00:07:40.908 Power Management 00:07:40.908 ================ 00:07:40.908 Number of Power States: 1 00:07:40.908 Current Power State: Power State #0 00:07:40.908 Power State #0: 00:07:40.908 Max Power: 25.00 W 00:07:40.908 Non-Operational State: Operational 00:07:40.908 Entry Latency: 16 microseconds 00:07:40.908 Exit Latency: 4 microseconds 00:07:40.908 Relative Read Throughput: 0 00:07:40.908 Relative Read Latency: 0 00:07:40.908 Relative Write Throughput: 0 00:07:40.908 Relative Write Latency: 0 00:07:40.908 Idle Power: Not Reported 00:07:40.908 Active Power: Not Reported 00:07:40.908 Non-Operational Permissive Mode: Not Supported 00:07:40.908 00:07:40.908 Health Information 00:07:40.908 ================== 00:07:40.908 Critical Warnings: 00:07:40.908 Available Spare Space: OK 00:07:40.908 Temperature: OK 00:07:40.908 Device Reliability: OK 00:07:40.908 Read Only: No 00:07:40.908 Volatile Memory Backup: OK 00:07:40.908 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.908 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.908 Available Spare: 0% 00:07:40.908 Available Spare Threshold: 0% 00:07:40.908 Life Percentage Used: 0% 00:07:40.908 Data Units Read: 2146 00:07:40.908 Data Units Written: 1933 00:07:40.908 Host Read Commands: 119547 00:07:40.908 Host Write Commands: 117816 00:07:40.908 Controller Busy Time: 0 minutes 00:07:40.908 Power Cycles: 0 00:07:40.908 Power On Hours: 0 hours 00:07:40.908 Unsafe Shutdowns: 0 00:07:40.908 Unrecoverable Media Errors: 0 00:07:40.908 Lifetime Error Log Entries: 0 00:07:40.908 Warning Temperature Time: 0 minutes 00:07:40.908 Critical Temperature Time: 0 minutes 00:07:40.908 00:07:40.908 Number of Queues 00:07:40.908 ================ 00:07:40.908 Number of I/O Submission Queues: 64 00:07:40.908 Number of I/O Completion Queues: 64 00:07:40.908 00:07:40.908 ZNS Specific Controller Data 00:07:40.908 ============================ 00:07:40.908 Zone Append Size Limit: 0 00:07:40.908 00:07:40.908 00:07:40.908 Active Namespaces 00:07:40.908 ================= 00:07:40.908 Namespace ID:1 00:07:40.908 Error Recovery Timeout: Unlimited 00:07:40.908 Command Set Identifier: NVM (00h) 00:07:40.908 Deallocate: Supported 00:07:40.908 Deallocated/Unwritten Error: Supported 00:07:40.908 Deallocated Read Value: All 0x00 00:07:40.908 Deallocate in Write Zeroes: Not Supported 00:07:40.908 Deallocated Guard Field: 0xFFFF 00:07:40.908 Flush: Supported 00:07:40.908 Reservation: Not Supported 00:07:40.908 Namespace Sharing Capabilities: Private 00:07:40.908 Size (in LBAs): 1048576 (4GiB) 00:07:40.908 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.908 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.908 Thin Provisioning: Not Supported 00:07:40.908 Per-NS Atomic Units: No 00:07:40.908 Maximum Single Source Range Length: 128 00:07:40.909 Maximum Copy Length: 128 00:07:40.909 Maximum Source Range Count: 128 00:07:40.909 NGUID/EUI64 Never Reused: No 00:07:40.909 Namespace Write Protected: No 00:07:40.909 Number of LBA Formats: 8 00:07:40.909 Current LBA Format: LBA Format #04 00:07:40.909 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.909 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.909 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.909 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.909 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.909 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.909 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.909 00:07:40.909 NVM Specific Namespace Data 00:07:40.909 =========================== 00:07:40.909 Logical Block Storage Tag Mask: 0 00:07:40.909 Protection Information Capabilities: 00:07:40.909 16b Guard Protection Information Storage Tag Support: No 00:07:40.909 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.909 Storage Tag Check Read Support: No 00:07:40.909 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Namespace ID:2 00:07:40.909 Error Recovery Timeout: Unlimited 00:07:40.909 Command Set Identifier: NVM (00h) 00:07:40.909 Deallocate: Supported 00:07:40.909 Deallocated/Unwritten Error: Supported 00:07:40.909 Deallocated Read Value: All 0x00 00:07:40.909 Deallocate in Write Zeroes: Not Supported 00:07:40.909 Deallocated Guard Field: 0xFFFF 00:07:40.909 Flush: Supported 00:07:40.909 Reservation: Not Supported 00:07:40.909 Namespace Sharing Capabilities: Private 00:07:40.909 Size (in LBAs): 1048576 (4GiB) 00:07:40.909 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.909 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.909 Thin Provisioning: Not Supported 00:07:40.909 Per-NS Atomic Units: No 00:07:40.909 Maximum Single Source Range Length: 128 00:07:40.909 Maximum Copy Length: 128 00:07:40.909 Maximum Source Range Count: 128 00:07:40.909 NGUID/EUI64 Never Reused: No 00:07:40.909 Namespace Write Protected: No 00:07:40.909 Number of LBA Formats: 8 00:07:40.909 Current LBA Format: LBA Format #04 00:07:40.909 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.909 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.909 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.909 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.909 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.909 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.909 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.909 00:07:40.909 NVM Specific Namespace Data 00:07:40.909 =========================== 00:07:40.909 Logical Block Storage Tag Mask: 0 00:07:40.909 Protection Information Capabilities: 00:07:40.909 16b Guard Protection Information Storage Tag Support: No 00:07:40.909 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.909 Storage Tag Check Read Support: No 00:07:40.909 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Namespace ID:3 00:07:40.909 Error Recovery Timeout: Unlimited 00:07:40.909 Command Set Identifier: NVM (00h) 00:07:40.909 Deallocate: Supported 00:07:40.909 Deallocated/Unwritten Error: Supported 00:07:40.909 Deallocated Read Value: All 0x00 00:07:40.909 Deallocate in Write Zeroes: Not Supported 00:07:40.909 Deallocated Guard Field: 0xFFFF 00:07:40.909 Flush: Supported 00:07:40.909 Reservation: Not Supported 00:07:40.909 Namespace Sharing Capabilities: Private 00:07:40.909 Size (in LBAs): 1048576 (4GiB) 00:07:40.909 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.909 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.909 Thin Provisioning: Not Supported 00:07:40.909 Per-NS Atomic Units: No 00:07:40.909 Maximum Single Source Range Length: 128 00:07:40.909 Maximum Copy Length: 128 00:07:40.909 Maximum Source Range Count: 128 00:07:40.909 NGUID/EUI64 Never Reused: No 00:07:40.909 Namespace Write Protected: No 00:07:40.909 Number of LBA Formats: 8 00:07:40.909 Current LBA Format: LBA Format #04 00:07:40.909 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.909 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.909 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.909 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.909 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.909 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.909 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.909 00:07:40.909 NVM Specific Namespace Data 00:07:40.909 =========================== 00:07:40.909 Logical Block Storage Tag Mask: 0 00:07:40.909 Protection Information Capabilities: 00:07:40.909 16b Guard Protection Information Storage Tag Support: No 00:07:40.909 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.909 Storage Tag Check Read Support: No 00:07:40.909 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.909 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:40.909 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:41.168 ===================================================== 00:07:41.168 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:41.168 ===================================================== 00:07:41.168 Controller Capabilities/Features 00:07:41.168 ================================ 00:07:41.168 Vendor ID: 1b36 00:07:41.168 Subsystem Vendor ID: 1af4 00:07:41.168 Serial Number: 12340 00:07:41.168 Model Number: QEMU NVMe Ctrl 00:07:41.168 Firmware Version: 8.0.0 00:07:41.168 Recommended Arb Burst: 6 00:07:41.168 IEEE OUI Identifier: 00 54 52 00:07:41.168 Multi-path I/O 00:07:41.168 May have multiple subsystem ports: No 00:07:41.168 May have multiple controllers: No 00:07:41.168 Associated with SR-IOV VF: No 00:07:41.168 Max Data Transfer Size: 524288 00:07:41.168 Max Number of Namespaces: 256 00:07:41.168 Max Number of I/O Queues: 64 00:07:41.168 NVMe Specification Version (VS): 1.4 00:07:41.168 NVMe Specification Version (Identify): 1.4 00:07:41.168 Maximum Queue Entries: 2048 00:07:41.168 Contiguous Queues Required: Yes 00:07:41.168 Arbitration Mechanisms Supported 00:07:41.168 Weighted Round Robin: Not Supported 00:07:41.168 Vendor Specific: Not Supported 00:07:41.168 Reset Timeout: 7500 ms 00:07:41.168 Doorbell Stride: 4 bytes 00:07:41.168 NVM Subsystem Reset: Not Supported 00:07:41.168 Command Sets Supported 00:07:41.168 NVM Command Set: Supported 00:07:41.168 Boot Partition: Not Supported 00:07:41.168 Memory Page Size Minimum: 4096 bytes 00:07:41.168 Memory Page Size Maximum: 65536 bytes 00:07:41.168 Persistent Memory Region: Not Supported 00:07:41.168 Optional Asynchronous Events Supported 00:07:41.168 Namespace Attribute Notices: Supported 00:07:41.168 Firmware Activation Notices: Not Supported 00:07:41.168 ANA Change Notices: Not Supported 00:07:41.168 PLE Aggregate Log Change Notices: Not Supported 00:07:41.168 LBA Status Info Alert Notices: Not Supported 00:07:41.168 EGE Aggregate Log Change Notices: Not Supported 00:07:41.168 Normal NVM Subsystem Shutdown event: Not Supported 00:07:41.168 Zone Descriptor Change Notices: Not Supported 00:07:41.168 Discovery Log Change Notices: Not Supported 00:07:41.168 Controller Attributes 00:07:41.168 128-bit Host Identifier: Not Supported 00:07:41.168 Non-Operational Permissive Mode: Not Supported 00:07:41.168 NVM Sets: Not Supported 00:07:41.168 Read Recovery Levels: Not Supported 00:07:41.168 Endurance Groups: Not Supported 00:07:41.168 Predictable Latency Mode: Not Supported 00:07:41.168 Traffic Based Keep ALive: Not Supported 00:07:41.168 Namespace Granularity: Not Supported 00:07:41.168 SQ Associations: Not Supported 00:07:41.168 UUID List: Not Supported 00:07:41.168 Multi-Domain Subsystem: Not Supported 00:07:41.168 Fixed Capacity Management: Not Supported 00:07:41.168 Variable Capacity Management: Not Supported 00:07:41.168 Delete Endurance Group: Not Supported 00:07:41.168 Delete NVM Set: Not Supported 00:07:41.168 Extended LBA Formats Supported: Supported 00:07:41.168 Flexible Data Placement Supported: Not Supported 00:07:41.168 00:07:41.168 Controller Memory Buffer Support 00:07:41.168 ================================ 00:07:41.168 Supported: No 00:07:41.168 00:07:41.168 Persistent Memory Region Support 00:07:41.168 ================================ 00:07:41.168 Supported: No 00:07:41.168 00:07:41.168 Admin Command Set Attributes 00:07:41.168 ============================ 00:07:41.168 Security Send/Receive: Not Supported 00:07:41.168 
Format NVM: Supported 00:07:41.168 Firmware Activate/Download: Not Supported 00:07:41.168 Namespace Management: Supported 00:07:41.168 Device Self-Test: Not Supported 00:07:41.168 Directives: Supported 00:07:41.168 NVMe-MI: Not Supported 00:07:41.168 Virtualization Management: Not Supported 00:07:41.168 Doorbell Buffer Config: Supported 00:07:41.168 Get LBA Status Capability: Not Supported 00:07:41.168 Command & Feature Lockdown Capability: Not Supported 00:07:41.168 Abort Command Limit: 4 00:07:41.168 Async Event Request Limit: 4 00:07:41.168 Number of Firmware Slots: N/A 00:07:41.168 Firmware Slot 1 Read-Only: N/A 00:07:41.168 Firmware Activation Without Reset: N/A 00:07:41.168 Multiple Update Detection Support: N/A 00:07:41.168 Firmware Update Granularity: No Information Provided 00:07:41.168 Per-Namespace SMART Log: Yes 00:07:41.168 Asymmetric Namespace Access Log Page: Not Supported 00:07:41.168 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:41.168 Command Effects Log Page: Supported 00:07:41.168 Get Log Page Extended Data: Supported 00:07:41.168 Telemetry Log Pages: Not Supported 00:07:41.168 Persistent Event Log Pages: Not Supported 00:07:41.168 Supported Log Pages Log Page: May Support 00:07:41.168 Commands Supported & Effects Log Page: Not Supported 00:07:41.168 Feature Identifiers & Effects Log Page:May Support 00:07:41.168 NVMe-MI Commands & Effects Log Page: May Support 00:07:41.168 Data Area 4 for Telemetry Log: Not Supported 00:07:41.168 Error Log Page Entries Supported: 1 00:07:41.168 Keep Alive: Not Supported 00:07:41.168 00:07:41.168 NVM Command Set Attributes 00:07:41.168 ========================== 00:07:41.168 Submission Queue Entry Size 00:07:41.168 Max: 64 00:07:41.168 Min: 64 00:07:41.168 Completion Queue Entry Size 00:07:41.168 Max: 16 00:07:41.168 Min: 16 00:07:41.168 Number of Namespaces: 256 00:07:41.168 Compare Command: Supported 00:07:41.168 Write Uncorrectable Command: Not Supported 00:07:41.168 Dataset Management Command: Supported 00:07:41.168 Write Zeroes Command: Supported 00:07:41.168 Set Features Save Field: Supported 00:07:41.168 Reservations: Not Supported 00:07:41.168 Timestamp: Supported 00:07:41.168 Copy: Supported 00:07:41.168 Volatile Write Cache: Present 00:07:41.168 Atomic Write Unit (Normal): 1 00:07:41.168 Atomic Write Unit (PFail): 1 00:07:41.168 Atomic Compare & Write Unit: 1 00:07:41.168 Fused Compare & Write: Not Supported 00:07:41.168 Scatter-Gather List 00:07:41.168 SGL Command Set: Supported 00:07:41.168 SGL Keyed: Not Supported 00:07:41.168 SGL Bit Bucket Descriptor: Not Supported 00:07:41.168 SGL Metadata Pointer: Not Supported 00:07:41.168 Oversized SGL: Not Supported 00:07:41.168 SGL Metadata Address: Not Supported 00:07:41.168 SGL Offset: Not Supported 00:07:41.168 Transport SGL Data Block: Not Supported 00:07:41.168 Replay Protected Memory Block: Not Supported 00:07:41.168 00:07:41.168 Firmware Slot Information 00:07:41.168 ========================= 00:07:41.168 Active slot: 1 00:07:41.168 Slot 1 Firmware Revision: 1.0 00:07:41.168 00:07:41.168 00:07:41.168 Commands Supported and Effects 00:07:41.168 ============================== 00:07:41.168 Admin Commands 00:07:41.168 -------------- 00:07:41.168 Delete I/O Submission Queue (00h): Supported 00:07:41.168 Create I/O Submission Queue (01h): Supported 00:07:41.168 Get Log Page (02h): Supported 00:07:41.168 Delete I/O Completion Queue (04h): Supported 00:07:41.168 Create I/O Completion Queue (05h): Supported 00:07:41.168 Identify (06h): Supported 00:07:41.168 Abort (08h): Supported 
00:07:41.168 Set Features (09h): Supported 00:07:41.168 Get Features (0Ah): Supported 00:07:41.168 Asynchronous Event Request (0Ch): Supported 00:07:41.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:41.168 Directive Send (19h): Supported 00:07:41.168 Directive Receive (1Ah): Supported 00:07:41.168 Virtualization Management (1Ch): Supported 00:07:41.168 Doorbell Buffer Config (7Ch): Supported 00:07:41.169 Format NVM (80h): Supported LBA-Change 00:07:41.169 I/O Commands 00:07:41.169 ------------ 00:07:41.169 Flush (00h): Supported LBA-Change 00:07:41.169 Write (01h): Supported LBA-Change 00:07:41.169 Read (02h): Supported 00:07:41.169 Compare (05h): Supported 00:07:41.169 Write Zeroes (08h): Supported LBA-Change 00:07:41.169 Dataset Management (09h): Supported LBA-Change 00:07:41.169 Unknown (0Ch): Supported 00:07:41.169 Unknown (12h): Supported 00:07:41.169 Copy (19h): Supported LBA-Change 00:07:41.169 Unknown (1Dh): Supported LBA-Change 00:07:41.169 00:07:41.169 Error Log 00:07:41.169 ========= 00:07:41.169 00:07:41.169 Arbitration 00:07:41.169 =========== 00:07:41.169 Arbitration Burst: no limit 00:07:41.169 00:07:41.169 Power Management 00:07:41.169 ================ 00:07:41.169 Number of Power States: 1 00:07:41.169 Current Power State: Power State #0 00:07:41.169 Power State #0: 00:07:41.169 Max Power: 25.00 W 00:07:41.169 Non-Operational State: Operational 00:07:41.169 Entry Latency: 16 microseconds 00:07:41.169 Exit Latency: 4 microseconds 00:07:41.169 Relative Read Throughput: 0 00:07:41.169 Relative Read Latency: 0 00:07:41.169 Relative Write Throughput: 0 00:07:41.169 Relative Write Latency: 0 00:07:41.169 Idle Power: Not Reported 00:07:41.169 Active Power: Not Reported 00:07:41.169 Non-Operational Permissive Mode: Not Supported 00:07:41.169 00:07:41.169 Health Information 00:07:41.169 ================== 00:07:41.169 Critical Warnings: 00:07:41.169 Available Spare Space: OK 00:07:41.169 Temperature: OK 00:07:41.169 Device Reliability: OK 00:07:41.169 Read Only: No 00:07:41.169 Volatile Memory Backup: OK 00:07:41.169 Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.169 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:41.169 Available Spare: 0% 00:07:41.169 Available Spare Threshold: 0% 00:07:41.169 Life Percentage Used: 0% 00:07:41.169 Data Units Read: 647 00:07:41.169 Data Units Written: 576 00:07:41.169 Host Read Commands: 39037 00:07:41.169 Host Write Commands: 38823 00:07:41.169 Controller Busy Time: 0 minutes 00:07:41.169 Power Cycles: 0 00:07:41.169 Power On Hours: 0 hours 00:07:41.169 Unsafe Shutdowns: 0 00:07:41.169 Unrecoverable Media Errors: 0 00:07:41.169 Lifetime Error Log Entries: 0 00:07:41.169 Warning Temperature Time: 0 minutes 00:07:41.169 Critical Temperature Time: 0 minutes 00:07:41.169 00:07:41.169 Number of Queues 00:07:41.169 ================ 00:07:41.169 Number of I/O Submission Queues: 64 00:07:41.169 Number of I/O Completion Queues: 64 00:07:41.169 00:07:41.169 ZNS Specific Controller Data 00:07:41.169 ============================ 00:07:41.169 Zone Append Size Limit: 0 00:07:41.169 00:07:41.169 00:07:41.169 Active Namespaces 00:07:41.169 ================= 00:07:41.169 Namespace ID:1 00:07:41.169 Error Recovery Timeout: Unlimited 00:07:41.169 Command Set Identifier: NVM (00h) 00:07:41.169 Deallocate: Supported 00:07:41.169 Deallocated/Unwritten Error: Supported 00:07:41.169 Deallocated Read Value: All 0x00 00:07:41.169 Deallocate in Write Zeroes: Not Supported 00:07:41.169 Deallocated Guard Field: 0xFFFF 00:07:41.169 Flush: 
Supported 00:07:41.169 Reservation: Not Supported 00:07:41.169 Metadata Transferred as: Separate Metadata Buffer 00:07:41.169 Namespace Sharing Capabilities: Private 00:07:41.169 Size (in LBAs): 1548666 (5GiB) 00:07:41.169 Capacity (in LBAs): 1548666 (5GiB) 00:07:41.169 Utilization (in LBAs): 1548666 (5GiB) 00:07:41.169 Thin Provisioning: Not Supported 00:07:41.169 Per-NS Atomic Units: No 00:07:41.169 Maximum Single Source Range Length: 128 00:07:41.169 Maximum Copy Length: 128 00:07:41.169 Maximum Source Range Count: 128 00:07:41.169 NGUID/EUI64 Never Reused: No 00:07:41.169 Namespace Write Protected: No 00:07:41.169 Number of LBA Formats: 8 00:07:41.169 Current LBA Format: LBA Format #07 00:07:41.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.169 00:07:41.169 NVM Specific Namespace Data 00:07:41.169 =========================== 00:07:41.169 Logical Block Storage Tag Mask: 0 00:07:41.169 Protection Information Capabilities: 00:07:41.169 16b Guard Protection Information Storage Tag Support: No 00:07:41.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.169 Storage Tag Check Read Support: No 00:07:41.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.169 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:41.169 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:41.428 ===================================================== 00:07:41.428 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:41.428 ===================================================== 00:07:41.428 Controller Capabilities/Features 00:07:41.428 ================================ 00:07:41.428 Vendor ID: 1b36 00:07:41.428 Subsystem Vendor ID: 1af4 00:07:41.428 Serial Number: 12341 00:07:41.428 Model Number: QEMU NVMe Ctrl 00:07:41.428 Firmware Version: 8.0.0 00:07:41.428 Recommended Arb Burst: 6 00:07:41.428 IEEE OUI Identifier: 00 54 52 00:07:41.428 Multi-path I/O 00:07:41.428 May have multiple subsystem ports: No 00:07:41.428 May have multiple controllers: No 00:07:41.428 Associated with SR-IOV VF: No 00:07:41.428 Max Data Transfer Size: 524288 00:07:41.428 Max Number of Namespaces: 256 00:07:41.428 Max Number of I/O Queues: 64 00:07:41.428 NVMe 
Specification Version (VS): 1.4 00:07:41.428 NVMe Specification Version (Identify): 1.4 00:07:41.428 Maximum Queue Entries: 2048 00:07:41.428 Contiguous Queues Required: Yes 00:07:41.428 Arbitration Mechanisms Supported 00:07:41.428 Weighted Round Robin: Not Supported 00:07:41.428 Vendor Specific: Not Supported 00:07:41.428 Reset Timeout: 7500 ms 00:07:41.428 Doorbell Stride: 4 bytes 00:07:41.428 NVM Subsystem Reset: Not Supported 00:07:41.428 Command Sets Supported 00:07:41.428 NVM Command Set: Supported 00:07:41.428 Boot Partition: Not Supported 00:07:41.428 Memory Page Size Minimum: 4096 bytes 00:07:41.428 Memory Page Size Maximum: 65536 bytes 00:07:41.428 Persistent Memory Region: Not Supported 00:07:41.428 Optional Asynchronous Events Supported 00:07:41.428 Namespace Attribute Notices: Supported 00:07:41.428 Firmware Activation Notices: Not Supported 00:07:41.428 ANA Change Notices: Not Supported 00:07:41.428 PLE Aggregate Log Change Notices: Not Supported 00:07:41.428 LBA Status Info Alert Notices: Not Supported 00:07:41.428 EGE Aggregate Log Change Notices: Not Supported 00:07:41.428 Normal NVM Subsystem Shutdown event: Not Supported 00:07:41.428 Zone Descriptor Change Notices: Not Supported 00:07:41.428 Discovery Log Change Notices: Not Supported 00:07:41.428 Controller Attributes 00:07:41.428 128-bit Host Identifier: Not Supported 00:07:41.428 Non-Operational Permissive Mode: Not Supported 00:07:41.428 NVM Sets: Not Supported 00:07:41.428 Read Recovery Levels: Not Supported 00:07:41.428 Endurance Groups: Not Supported 00:07:41.428 Predictable Latency Mode: Not Supported 00:07:41.428 Traffic Based Keep ALive: Not Supported 00:07:41.428 Namespace Granularity: Not Supported 00:07:41.428 SQ Associations: Not Supported 00:07:41.428 UUID List: Not Supported 00:07:41.428 Multi-Domain Subsystem: Not Supported 00:07:41.428 Fixed Capacity Management: Not Supported 00:07:41.428 Variable Capacity Management: Not Supported 00:07:41.428 Delete Endurance Group: Not Supported 00:07:41.428 Delete NVM Set: Not Supported 00:07:41.428 Extended LBA Formats Supported: Supported 00:07:41.428 Flexible Data Placement Supported: Not Supported 00:07:41.428 00:07:41.428 Controller Memory Buffer Support 00:07:41.428 ================================ 00:07:41.428 Supported: No 00:07:41.428 00:07:41.428 Persistent Memory Region Support 00:07:41.428 ================================ 00:07:41.428 Supported: No 00:07:41.428 00:07:41.428 Admin Command Set Attributes 00:07:41.428 ============================ 00:07:41.428 Security Send/Receive: Not Supported 00:07:41.428 Format NVM: Supported 00:07:41.428 Firmware Activate/Download: Not Supported 00:07:41.428 Namespace Management: Supported 00:07:41.428 Device Self-Test: Not Supported 00:07:41.428 Directives: Supported 00:07:41.428 NVMe-MI: Not Supported 00:07:41.428 Virtualization Management: Not Supported 00:07:41.428 Doorbell Buffer Config: Supported 00:07:41.428 Get LBA Status Capability: Not Supported 00:07:41.428 Command & Feature Lockdown Capability: Not Supported 00:07:41.428 Abort Command Limit: 4 00:07:41.428 Async Event Request Limit: 4 00:07:41.428 Number of Firmware Slots: N/A 00:07:41.428 Firmware Slot 1 Read-Only: N/A 00:07:41.428 Firmware Activation Without Reset: N/A 00:07:41.428 Multiple Update Detection Support: N/A 00:07:41.428 Firmware Update Granularity: No Information Provided 00:07:41.428 Per-Namespace SMART Log: Yes 00:07:41.428 Asymmetric Namespace Access Log Page: Not Supported 00:07:41.428 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:41.428 Command Effects Log Page: Supported 00:07:41.428 Get Log Page Extended Data: Supported 00:07:41.428 Telemetry Log Pages: Not Supported 00:07:41.428 Persistent Event Log Pages: Not Supported 00:07:41.428 Supported Log Pages Log Page: May Support 00:07:41.428 Commands Supported & Effects Log Page: Not Supported 00:07:41.428 Feature Identifiers & Effects Log Page:May Support 00:07:41.428 NVMe-MI Commands & Effects Log Page: May Support 00:07:41.428 Data Area 4 for Telemetry Log: Not Supported 00:07:41.428 Error Log Page Entries Supported: 1 00:07:41.428 Keep Alive: Not Supported 00:07:41.428 00:07:41.428 NVM Command Set Attributes 00:07:41.428 ========================== 00:07:41.428 Submission Queue Entry Size 00:07:41.428 Max: 64 00:07:41.428 Min: 64 00:07:41.428 Completion Queue Entry Size 00:07:41.428 Max: 16 00:07:41.428 Min: 16 00:07:41.428 Number of Namespaces: 256 00:07:41.428 Compare Command: Supported 00:07:41.428 Write Uncorrectable Command: Not Supported 00:07:41.428 Dataset Management Command: Supported 00:07:41.428 Write Zeroes Command: Supported 00:07:41.428 Set Features Save Field: Supported 00:07:41.428 Reservations: Not Supported 00:07:41.428 Timestamp: Supported 00:07:41.428 Copy: Supported 00:07:41.428 Volatile Write Cache: Present 00:07:41.428 Atomic Write Unit (Normal): 1 00:07:41.428 Atomic Write Unit (PFail): 1 00:07:41.428 Atomic Compare & Write Unit: 1 00:07:41.428 Fused Compare & Write: Not Supported 00:07:41.428 Scatter-Gather List 00:07:41.428 SGL Command Set: Supported 00:07:41.428 SGL Keyed: Not Supported 00:07:41.428 SGL Bit Bucket Descriptor: Not Supported 00:07:41.428 SGL Metadata Pointer: Not Supported 00:07:41.428 Oversized SGL: Not Supported 00:07:41.428 SGL Metadata Address: Not Supported 00:07:41.428 SGL Offset: Not Supported 00:07:41.428 Transport SGL Data Block: Not Supported 00:07:41.428 Replay Protected Memory Block: Not Supported 00:07:41.428 00:07:41.428 Firmware Slot Information 00:07:41.429 ========================= 00:07:41.429 Active slot: 1 00:07:41.429 Slot 1 Firmware Revision: 1.0 00:07:41.429 00:07:41.429 00:07:41.429 Commands Supported and Effects 00:07:41.429 ============================== 00:07:41.429 Admin Commands 00:07:41.429 -------------- 00:07:41.429 Delete I/O Submission Queue (00h): Supported 00:07:41.429 Create I/O Submission Queue (01h): Supported 00:07:41.429 Get Log Page (02h): Supported 00:07:41.429 Delete I/O Completion Queue (04h): Supported 00:07:41.429 Create I/O Completion Queue (05h): Supported 00:07:41.429 Identify (06h): Supported 00:07:41.429 Abort (08h): Supported 00:07:41.429 Set Features (09h): Supported 00:07:41.429 Get Features (0Ah): Supported 00:07:41.429 Asynchronous Event Request (0Ch): Supported 00:07:41.429 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:41.429 Directive Send (19h): Supported 00:07:41.429 Directive Receive (1Ah): Supported 00:07:41.429 Virtualization Management (1Ch): Supported 00:07:41.429 Doorbell Buffer Config (7Ch): Supported 00:07:41.429 Format NVM (80h): Supported LBA-Change 00:07:41.429 I/O Commands 00:07:41.429 ------------ 00:07:41.429 Flush (00h): Supported LBA-Change 00:07:41.429 Write (01h): Supported LBA-Change 00:07:41.429 Read (02h): Supported 00:07:41.429 Compare (05h): Supported 00:07:41.429 Write Zeroes (08h): Supported LBA-Change 00:07:41.429 Dataset Management (09h): Supported LBA-Change 00:07:41.429 Unknown (0Ch): Supported 00:07:41.429 Unknown (12h): Supported 00:07:41.429 Copy (19h): Supported LBA-Change 00:07:41.429 Unknown (1Dh): 
Supported LBA-Change 00:07:41.429 00:07:41.429 Error Log 00:07:41.429 ========= 00:07:41.429 00:07:41.429 Arbitration 00:07:41.429 =========== 00:07:41.429 Arbitration Burst: no limit 00:07:41.429 00:07:41.429 Power Management 00:07:41.429 ================ 00:07:41.429 Number of Power States: 1 00:07:41.429 Current Power State: Power State #0 00:07:41.429 Power State #0: 00:07:41.429 Max Power: 25.00 W 00:07:41.429 Non-Operational State: Operational 00:07:41.429 Entry Latency: 16 microseconds 00:07:41.429 Exit Latency: 4 microseconds 00:07:41.429 Relative Read Throughput: 0 00:07:41.429 Relative Read Latency: 0 00:07:41.429 Relative Write Throughput: 0 00:07:41.429 Relative Write Latency: 0 00:07:41.429 Idle Power: Not Reported 00:07:41.429 Active Power: Not Reported 00:07:41.429 Non-Operational Permissive Mode: Not Supported 00:07:41.429 00:07:41.429 Health Information 00:07:41.429 ================== 00:07:41.429 Critical Warnings: 00:07:41.429 Available Spare Space: OK 00:07:41.429 Temperature: OK 00:07:41.429 Device Reliability: OK 00:07:41.429 Read Only: No 00:07:41.429 Volatile Memory Backup: OK 00:07:41.429 Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.429 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:41.429 Available Spare: 0% 00:07:41.429 Available Spare Threshold: 0% 00:07:41.429 Life Percentage Used: 0% 00:07:41.429 Data Units Read: 1004 00:07:41.429 Data Units Written: 871 00:07:41.429 Host Read Commands: 58203 00:07:41.429 Host Write Commands: 56976 00:07:41.429 Controller Busy Time: 0 minutes 00:07:41.429 Power Cycles: 0 00:07:41.429 Power On Hours: 0 hours 00:07:41.429 Unsafe Shutdowns: 0 00:07:41.429 Unrecoverable Media Errors: 0 00:07:41.429 Lifetime Error Log Entries: 0 00:07:41.429 Warning Temperature Time: 0 minutes 00:07:41.429 Critical Temperature Time: 0 minutes 00:07:41.429 00:07:41.429 Number of Queues 00:07:41.429 ================ 00:07:41.429 Number of I/O Submission Queues: 64 00:07:41.429 Number of I/O Completion Queues: 64 00:07:41.429 00:07:41.429 ZNS Specific Controller Data 00:07:41.429 ============================ 00:07:41.429 Zone Append Size Limit: 0 00:07:41.429 00:07:41.429 00:07:41.429 Active Namespaces 00:07:41.429 ================= 00:07:41.429 Namespace ID:1 00:07:41.429 Error Recovery Timeout: Unlimited 00:07:41.429 Command Set Identifier: NVM (00h) 00:07:41.429 Deallocate: Supported 00:07:41.429 Deallocated/Unwritten Error: Supported 00:07:41.429 Deallocated Read Value: All 0x00 00:07:41.429 Deallocate in Write Zeroes: Not Supported 00:07:41.429 Deallocated Guard Field: 0xFFFF 00:07:41.429 Flush: Supported 00:07:41.429 Reservation: Not Supported 00:07:41.429 Namespace Sharing Capabilities: Private 00:07:41.429 Size (in LBAs): 1310720 (5GiB) 00:07:41.429 Capacity (in LBAs): 1310720 (5GiB) 00:07:41.429 Utilization (in LBAs): 1310720 (5GiB) 00:07:41.429 Thin Provisioning: Not Supported 00:07:41.429 Per-NS Atomic Units: No 00:07:41.429 Maximum Single Source Range Length: 128 00:07:41.429 Maximum Copy Length: 128 00:07:41.429 Maximum Source Range Count: 128 00:07:41.429 NGUID/EUI64 Never Reused: No 00:07:41.429 Namespace Write Protected: No 00:07:41.429 Number of LBA Formats: 8 00:07:41.429 Current LBA Format: LBA Format #04 00:07:41.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.429 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.429 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.429 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.429 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:41.429 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.429 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.429 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.429 00:07:41.429 NVM Specific Namespace Data 00:07:41.429 =========================== 00:07:41.429 Logical Block Storage Tag Mask: 0 00:07:41.429 Protection Information Capabilities: 00:07:41.429 16b Guard Protection Information Storage Tag Support: No 00:07:41.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.429 Storage Tag Check Read Support: No 00:07:41.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.429 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:41.429 23:51:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:41.690 ===================================================== 00:07:41.690 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:41.690 ===================================================== 00:07:41.690 Controller Capabilities/Features 00:07:41.690 ================================ 00:07:41.690 Vendor ID: 1b36 00:07:41.690 Subsystem Vendor ID: 1af4 00:07:41.690 Serial Number: 12342 00:07:41.690 Model Number: QEMU NVMe Ctrl 00:07:41.690 Firmware Version: 8.0.0 00:07:41.690 Recommended Arb Burst: 6 00:07:41.690 IEEE OUI Identifier: 00 54 52 00:07:41.690 Multi-path I/O 00:07:41.690 May have multiple subsystem ports: No 00:07:41.690 May have multiple controllers: No 00:07:41.690 Associated with SR-IOV VF: No 00:07:41.690 Max Data Transfer Size: 524288 00:07:41.690 Max Number of Namespaces: 256 00:07:41.690 Max Number of I/O Queues: 64 00:07:41.690 NVMe Specification Version (VS): 1.4 00:07:41.690 NVMe Specification Version (Identify): 1.4 00:07:41.690 Maximum Queue Entries: 2048 00:07:41.690 Contiguous Queues Required: Yes 00:07:41.690 Arbitration Mechanisms Supported 00:07:41.690 Weighted Round Robin: Not Supported 00:07:41.690 Vendor Specific: Not Supported 00:07:41.690 Reset Timeout: 7500 ms 00:07:41.690 Doorbell Stride: 4 bytes 00:07:41.690 NVM Subsystem Reset: Not Supported 00:07:41.690 Command Sets Supported 00:07:41.690 NVM Command Set: Supported 00:07:41.690 Boot Partition: Not Supported 00:07:41.690 Memory Page Size Minimum: 4096 bytes 00:07:41.690 Memory Page Size Maximum: 65536 bytes 00:07:41.690 Persistent Memory Region: Not Supported 00:07:41.690 Optional Asynchronous Events Supported 00:07:41.690 Namespace Attribute Notices: Supported 00:07:41.690 Firmware Activation Notices: Not Supported 00:07:41.690 ANA Change Notices: Not Supported 00:07:41.690 PLE Aggregate Log Change Notices: Not Supported 00:07:41.690 LBA Status Info Alert Notices: 
Not Supported 00:07:41.690 EGE Aggregate Log Change Notices: Not Supported 00:07:41.690 Normal NVM Subsystem Shutdown event: Not Supported 00:07:41.690 Zone Descriptor Change Notices: Not Supported 00:07:41.690 Discovery Log Change Notices: Not Supported 00:07:41.690 Controller Attributes 00:07:41.690 128-bit Host Identifier: Not Supported 00:07:41.690 Non-Operational Permissive Mode: Not Supported 00:07:41.690 NVM Sets: Not Supported 00:07:41.690 Read Recovery Levels: Not Supported 00:07:41.690 Endurance Groups: Not Supported 00:07:41.690 Predictable Latency Mode: Not Supported 00:07:41.690 Traffic Based Keep ALive: Not Supported 00:07:41.690 Namespace Granularity: Not Supported 00:07:41.690 SQ Associations: Not Supported 00:07:41.690 UUID List: Not Supported 00:07:41.690 Multi-Domain Subsystem: Not Supported 00:07:41.690 Fixed Capacity Management: Not Supported 00:07:41.690 Variable Capacity Management: Not Supported 00:07:41.690 Delete Endurance Group: Not Supported 00:07:41.690 Delete NVM Set: Not Supported 00:07:41.690 Extended LBA Formats Supported: Supported 00:07:41.690 Flexible Data Placement Supported: Not Supported 00:07:41.690 00:07:41.690 Controller Memory Buffer Support 00:07:41.690 ================================ 00:07:41.690 Supported: No 00:07:41.690 00:07:41.690 Persistent Memory Region Support 00:07:41.690 ================================ 00:07:41.690 Supported: No 00:07:41.690 00:07:41.690 Admin Command Set Attributes 00:07:41.690 ============================ 00:07:41.690 Security Send/Receive: Not Supported 00:07:41.690 Format NVM: Supported 00:07:41.690 Firmware Activate/Download: Not Supported 00:07:41.690 Namespace Management: Supported 00:07:41.690 Device Self-Test: Not Supported 00:07:41.690 Directives: Supported 00:07:41.690 NVMe-MI: Not Supported 00:07:41.690 Virtualization Management: Not Supported 00:07:41.690 Doorbell Buffer Config: Supported 00:07:41.690 Get LBA Status Capability: Not Supported 00:07:41.690 Command & Feature Lockdown Capability: Not Supported 00:07:41.690 Abort Command Limit: 4 00:07:41.690 Async Event Request Limit: 4 00:07:41.690 Number of Firmware Slots: N/A 00:07:41.690 Firmware Slot 1 Read-Only: N/A 00:07:41.690 Firmware Activation Without Reset: N/A 00:07:41.690 Multiple Update Detection Support: N/A 00:07:41.690 Firmware Update Granularity: No Information Provided 00:07:41.690 Per-Namespace SMART Log: Yes 00:07:41.690 Asymmetric Namespace Access Log Page: Not Supported 00:07:41.690 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:41.690 Command Effects Log Page: Supported 00:07:41.690 Get Log Page Extended Data: Supported 00:07:41.690 Telemetry Log Pages: Not Supported 00:07:41.690 Persistent Event Log Pages: Not Supported 00:07:41.690 Supported Log Pages Log Page: May Support 00:07:41.690 Commands Supported & Effects Log Page: Not Supported 00:07:41.690 Feature Identifiers & Effects Log Page:May Support 00:07:41.690 NVMe-MI Commands & Effects Log Page: May Support 00:07:41.690 Data Area 4 for Telemetry Log: Not Supported 00:07:41.690 Error Log Page Entries Supported: 1 00:07:41.690 Keep Alive: Not Supported 00:07:41.690 00:07:41.690 NVM Command Set Attributes 00:07:41.690 ========================== 00:07:41.690 Submission Queue Entry Size 00:07:41.690 Max: 64 00:07:41.690 Min: 64 00:07:41.690 Completion Queue Entry Size 00:07:41.690 Max: 16 00:07:41.690 Min: 16 00:07:41.690 Number of Namespaces: 256 00:07:41.691 Compare Command: Supported 00:07:41.691 Write Uncorrectable Command: Not Supported 00:07:41.691 Dataset Management Command: 
Supported 00:07:41.691 Write Zeroes Command: Supported 00:07:41.691 Set Features Save Field: Supported 00:07:41.691 Reservations: Not Supported 00:07:41.691 Timestamp: Supported 00:07:41.691 Copy: Supported 00:07:41.691 Volatile Write Cache: Present 00:07:41.691 Atomic Write Unit (Normal): 1 00:07:41.691 Atomic Write Unit (PFail): 1 00:07:41.691 Atomic Compare & Write Unit: 1 00:07:41.691 Fused Compare & Write: Not Supported 00:07:41.691 Scatter-Gather List 00:07:41.691 SGL Command Set: Supported 00:07:41.691 SGL Keyed: Not Supported 00:07:41.691 SGL Bit Bucket Descriptor: Not Supported 00:07:41.691 SGL Metadata Pointer: Not Supported 00:07:41.691 Oversized SGL: Not Supported 00:07:41.691 SGL Metadata Address: Not Supported 00:07:41.691 SGL Offset: Not Supported 00:07:41.691 Transport SGL Data Block: Not Supported 00:07:41.691 Replay Protected Memory Block: Not Supported 00:07:41.691 00:07:41.691 Firmware Slot Information 00:07:41.691 ========================= 00:07:41.691 Active slot: 1 00:07:41.691 Slot 1 Firmware Revision: 1.0 00:07:41.691 00:07:41.691 00:07:41.691 Commands Supported and Effects 00:07:41.691 ============================== 00:07:41.691 Admin Commands 00:07:41.691 -------------- 00:07:41.691 Delete I/O Submission Queue (00h): Supported 00:07:41.691 Create I/O Submission Queue (01h): Supported 00:07:41.691 Get Log Page (02h): Supported 00:07:41.691 Delete I/O Completion Queue (04h): Supported 00:07:41.691 Create I/O Completion Queue (05h): Supported 00:07:41.691 Identify (06h): Supported 00:07:41.691 Abort (08h): Supported 00:07:41.691 Set Features (09h): Supported 00:07:41.691 Get Features (0Ah): Supported 00:07:41.691 Asynchronous Event Request (0Ch): Supported 00:07:41.691 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:41.691 Directive Send (19h): Supported 00:07:41.691 Directive Receive (1Ah): Supported 00:07:41.691 Virtualization Management (1Ch): Supported 00:07:41.691 Doorbell Buffer Config (7Ch): Supported 00:07:41.691 Format NVM (80h): Supported LBA-Change 00:07:41.691 I/O Commands 00:07:41.691 ------------ 00:07:41.691 Flush (00h): Supported LBA-Change 00:07:41.691 Write (01h): Supported LBA-Change 00:07:41.691 Read (02h): Supported 00:07:41.691 Compare (05h): Supported 00:07:41.691 Write Zeroes (08h): Supported LBA-Change 00:07:41.691 Dataset Management (09h): Supported LBA-Change 00:07:41.691 Unknown (0Ch): Supported 00:07:41.691 Unknown (12h): Supported 00:07:41.691 Copy (19h): Supported LBA-Change 00:07:41.691 Unknown (1Dh): Supported LBA-Change 00:07:41.691 00:07:41.691 Error Log 00:07:41.691 ========= 00:07:41.691 00:07:41.691 Arbitration 00:07:41.691 =========== 00:07:41.691 Arbitration Burst: no limit 00:07:41.691 00:07:41.691 Power Management 00:07:41.691 ================ 00:07:41.691 Number of Power States: 1 00:07:41.691 Current Power State: Power State #0 00:07:41.691 Power State #0: 00:07:41.691 Max Power: 25.00 W 00:07:41.691 Non-Operational State: Operational 00:07:41.691 Entry Latency: 16 microseconds 00:07:41.691 Exit Latency: 4 microseconds 00:07:41.691 Relative Read Throughput: 0 00:07:41.691 Relative Read Latency: 0 00:07:41.691 Relative Write Throughput: 0 00:07:41.691 Relative Write Latency: 0 00:07:41.691 Idle Power: Not Reported 00:07:41.691 Active Power: Not Reported 00:07:41.691 Non-Operational Permissive Mode: Not Supported 00:07:41.691 00:07:41.691 Health Information 00:07:41.691 ================== 00:07:41.691 Critical Warnings: 00:07:41.691 Available Spare Space: OK 00:07:41.691 Temperature: OK 00:07:41.691 Device 
Reliability: OK 00:07:41.691 Read Only: No 00:07:41.691 Volatile Memory Backup: OK 00:07:41.691 Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.691 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:41.691 Available Spare: 0% 00:07:41.691 Available Spare Threshold: 0% 00:07:41.691 Life Percentage Used: 0% 00:07:41.691 Data Units Read: 2146 00:07:41.691 Data Units Written: 1933 00:07:41.691 Host Read Commands: 119547 00:07:41.691 Host Write Commands: 117816 00:07:41.691 Controller Busy Time: 0 minutes 00:07:41.691 Power Cycles: 0 00:07:41.691 Power On Hours: 0 hours 00:07:41.691 Unsafe Shutdowns: 0 00:07:41.691 Unrecoverable Media Errors: 0 00:07:41.691 Lifetime Error Log Entries: 0 00:07:41.691 Warning Temperature Time: 0 minutes 00:07:41.691 Critical Temperature Time: 0 minutes 00:07:41.691 00:07:41.691 Number of Queues 00:07:41.691 ================ 00:07:41.691 Number of I/O Submission Queues: 64 00:07:41.691 Number of I/O Completion Queues: 64 00:07:41.691 00:07:41.691 ZNS Specific Controller Data 00:07:41.691 ============================ 00:07:41.691 Zone Append Size Limit: 0 00:07:41.691 00:07:41.691 00:07:41.691 Active Namespaces 00:07:41.691 ================= 00:07:41.691 Namespace ID:1 00:07:41.691 Error Recovery Timeout: Unlimited 00:07:41.691 Command Set Identifier: NVM (00h) 00:07:41.691 Deallocate: Supported 00:07:41.691 Deallocated/Unwritten Error: Supported 00:07:41.691 Deallocated Read Value: All 0x00 00:07:41.691 Deallocate in Write Zeroes: Not Supported 00:07:41.691 Deallocated Guard Field: 0xFFFF 00:07:41.691 Flush: Supported 00:07:41.691 Reservation: Not Supported 00:07:41.691 Namespace Sharing Capabilities: Private 00:07:41.691 Size (in LBAs): 1048576 (4GiB) 00:07:41.691 Capacity (in LBAs): 1048576 (4GiB) 00:07:41.691 Utilization (in LBAs): 1048576 (4GiB) 00:07:41.691 Thin Provisioning: Not Supported 00:07:41.691 Per-NS Atomic Units: No 00:07:41.691 Maximum Single Source Range Length: 128 00:07:41.691 Maximum Copy Length: 128 00:07:41.691 Maximum Source Range Count: 128 00:07:41.691 NGUID/EUI64 Never Reused: No 00:07:41.691 Namespace Write Protected: No 00:07:41.691 Number of LBA Formats: 8 00:07:41.691 Current LBA Format: LBA Format #04 00:07:41.691 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.691 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.691 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.691 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.691 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.691 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.691 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.691 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.691 00:07:41.691 NVM Specific Namespace Data 00:07:41.691 =========================== 00:07:41.691 Logical Block Storage Tag Mask: 0 00:07:41.691 Protection Information Capabilities: 00:07:41.691 16b Guard Protection Information Storage Tag Support: No 00:07:41.691 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.691 Storage Tag Check Read Support: No 00:07:41.691 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.691 Namespace ID:2 00:07:41.691 Error Recovery Timeout: Unlimited 00:07:41.691 Command Set Identifier: NVM (00h) 00:07:41.691 Deallocate: Supported 00:07:41.691 Deallocated/Unwritten Error: Supported 00:07:41.691 Deallocated Read Value: All 0x00 00:07:41.691 Deallocate in Write Zeroes: Not Supported 00:07:41.691 Deallocated Guard Field: 0xFFFF 00:07:41.691 Flush: Supported 00:07:41.691 Reservation: Not Supported 00:07:41.691 Namespace Sharing Capabilities: Private 00:07:41.691 Size (in LBAs): 1048576 (4GiB) 00:07:41.691 Capacity (in LBAs): 1048576 (4GiB) 00:07:41.691 Utilization (in LBAs): 1048576 (4GiB) 00:07:41.691 Thin Provisioning: Not Supported 00:07:41.691 Per-NS Atomic Units: No 00:07:41.691 Maximum Single Source Range Length: 128 00:07:41.691 Maximum Copy Length: 128 00:07:41.691 Maximum Source Range Count: 128 00:07:41.691 NGUID/EUI64 Never Reused: No 00:07:41.691 Namespace Write Protected: No 00:07:41.691 Number of LBA Formats: 8 00:07:41.691 Current LBA Format: LBA Format #04 00:07:41.691 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.691 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.691 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.692 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.692 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.692 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.692 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.692 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.692 00:07:41.692 NVM Specific Namespace Data 00:07:41.692 =========================== 00:07:41.692 Logical Block Storage Tag Mask: 0 00:07:41.692 Protection Information Capabilities: 00:07:41.692 16b Guard Protection Information Storage Tag Support: No 00:07:41.692 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.692 Storage Tag Check Read Support: No 00:07:41.692 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Namespace ID:3 00:07:41.692 Error Recovery Timeout: Unlimited 00:07:41.692 Command Set Identifier: NVM (00h) 00:07:41.692 Deallocate: Supported 00:07:41.692 Deallocated/Unwritten Error: Supported 00:07:41.692 Deallocated Read Value: All 0x00 00:07:41.692 Deallocate in Write Zeroes: Not Supported 00:07:41.692 Deallocated Guard Field: 0xFFFF 00:07:41.692 Flush: Supported 00:07:41.692 Reservation: Not Supported 00:07:41.692 
Namespace Sharing Capabilities: Private 00:07:41.692 Size (in LBAs): 1048576 (4GiB) 00:07:41.692 Capacity (in LBAs): 1048576 (4GiB) 00:07:41.692 Utilization (in LBAs): 1048576 (4GiB) 00:07:41.692 Thin Provisioning: Not Supported 00:07:41.692 Per-NS Atomic Units: No 00:07:41.692 Maximum Single Source Range Length: 128 00:07:41.692 Maximum Copy Length: 128 00:07:41.692 Maximum Source Range Count: 128 00:07:41.692 NGUID/EUI64 Never Reused: No 00:07:41.692 Namespace Write Protected: No 00:07:41.692 Number of LBA Formats: 8 00:07:41.692 Current LBA Format: LBA Format #04 00:07:41.692 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.692 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.692 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.692 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.692 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.692 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.692 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.692 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.692 00:07:41.692 NVM Specific Namespace Data 00:07:41.692 =========================== 00:07:41.692 Logical Block Storage Tag Mask: 0 00:07:41.692 Protection Information Capabilities: 00:07:41.692 16b Guard Protection Information Storage Tag Support: No 00:07:41.692 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.692 Storage Tag Check Read Support: No 00:07:41.692 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.692 23:51:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:41.692 23:51:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:41.966 ===================================================== 00:07:41.966 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:41.966 ===================================================== 00:07:41.966 Controller Capabilities/Features 00:07:41.966 ================================ 00:07:41.966 Vendor ID: 1b36 00:07:41.966 Subsystem Vendor ID: 1af4 00:07:41.966 Serial Number: 12343 00:07:41.966 Model Number: QEMU NVMe Ctrl 00:07:41.966 Firmware Version: 8.0.0 00:07:41.966 Recommended Arb Burst: 6 00:07:41.966 IEEE OUI Identifier: 00 54 52 00:07:41.966 Multi-path I/O 00:07:41.966 May have multiple subsystem ports: No 00:07:41.966 May have multiple controllers: Yes 00:07:41.966 Associated with SR-IOV VF: No 00:07:41.966 Max Data Transfer Size: 524288 00:07:41.966 Max Number of Namespaces: 256 00:07:41.966 Max Number of I/O Queues: 64 00:07:41.966 NVMe Specification Version (VS): 1.4 00:07:41.966 NVMe Specification Version (Identify): 1.4 00:07:41.966 Maximum Queue Entries: 2048 
00:07:41.966 Contiguous Queues Required: Yes 00:07:41.966 Arbitration Mechanisms Supported 00:07:41.966 Weighted Round Robin: Not Supported 00:07:41.966 Vendor Specific: Not Supported 00:07:41.966 Reset Timeout: 7500 ms 00:07:41.966 Doorbell Stride: 4 bytes 00:07:41.967 NVM Subsystem Reset: Not Supported 00:07:41.967 Command Sets Supported 00:07:41.967 NVM Command Set: Supported 00:07:41.967 Boot Partition: Not Supported 00:07:41.967 Memory Page Size Minimum: 4096 bytes 00:07:41.967 Memory Page Size Maximum: 65536 bytes 00:07:41.967 Persistent Memory Region: Not Supported 00:07:41.967 Optional Asynchronous Events Supported 00:07:41.967 Namespace Attribute Notices: Supported 00:07:41.967 Firmware Activation Notices: Not Supported 00:07:41.967 ANA Change Notices: Not Supported 00:07:41.967 PLE Aggregate Log Change Notices: Not Supported 00:07:41.967 LBA Status Info Alert Notices: Not Supported 00:07:41.967 EGE Aggregate Log Change Notices: Not Supported 00:07:41.967 Normal NVM Subsystem Shutdown event: Not Supported 00:07:41.967 Zone Descriptor Change Notices: Not Supported 00:07:41.967 Discovery Log Change Notices: Not Supported 00:07:41.967 Controller Attributes 00:07:41.967 128-bit Host Identifier: Not Supported 00:07:41.967 Non-Operational Permissive Mode: Not Supported 00:07:41.967 NVM Sets: Not Supported 00:07:41.967 Read Recovery Levels: Not Supported 00:07:41.967 Endurance Groups: Supported 00:07:41.967 Predictable Latency Mode: Not Supported 00:07:41.967 Traffic Based Keep Alive: Not Supported 00:07:41.967 Namespace Granularity: Not Supported 00:07:41.967 SQ Associations: Not Supported 00:07:41.967 UUID List: Not Supported 00:07:41.967 Multi-Domain Subsystem: Not Supported 00:07:41.967 Fixed Capacity Management: Not Supported 00:07:41.967 Variable Capacity Management: Not Supported 00:07:41.967 Delete Endurance Group: Not Supported 00:07:41.967 Delete NVM Set: Not Supported 00:07:41.967 Extended LBA Formats Supported: Supported 00:07:41.967 Flexible Data Placement Supported: Supported 00:07:41.967 00:07:41.967 Controller Memory Buffer Support 00:07:41.967 ================================ 00:07:41.967 Supported: No 00:07:41.967 00:07:41.967 Persistent Memory Region Support 00:07:41.967 ================================ 00:07:41.967 Supported: No 00:07:41.967 00:07:41.967 Admin Command Set Attributes 00:07:41.967 ============================ 00:07:41.967 Security Send/Receive: Not Supported 00:07:41.967 Format NVM: Supported 00:07:41.967 Firmware Activate/Download: Not Supported 00:07:41.967 Namespace Management: Supported 00:07:41.967 Device Self-Test: Not Supported 00:07:41.967 Directives: Supported 00:07:41.967 NVMe-MI: Not Supported 00:07:41.967 Virtualization Management: Not Supported 00:07:41.967 Doorbell Buffer Config: Supported 00:07:41.967 Get LBA Status Capability: Not Supported 00:07:41.967 Command & Feature Lockdown Capability: Not Supported 00:07:41.967 Abort Command Limit: 4 00:07:41.967 Async Event Request Limit: 4 00:07:41.967 Number of Firmware Slots: N/A 00:07:41.967 Firmware Slot 1 Read-Only: N/A 00:07:41.967 Firmware Activation Without Reset: N/A 00:07:41.967 Multiple Update Detection Support: N/A 00:07:41.967 Firmware Update Granularity: No Information Provided 00:07:41.967 Per-Namespace SMART Log: Yes 00:07:41.967 Asymmetric Namespace Access Log Page: Not Supported 00:07:41.967 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:41.967 Command Effects Log Page: Supported 00:07:41.967 Get Log Page Extended Data: Supported 00:07:41.967 Telemetry Log Pages: Not 
Supported 00:07:41.967 Persistent Event Log Pages: Not Supported 00:07:41.967 Supported Log Pages Log Page: May Support 00:07:41.967 Commands Supported & Effects Log Page: Not Supported 00:07:41.967 Feature Identifiers & Effects Log Page: May Support 00:07:41.967 NVMe-MI Commands & Effects Log Page: May Support 00:07:41.967 Data Area 4 for Telemetry Log: Not Supported 00:07:41.967 Error Log Page Entries Supported: 1 00:07:41.967 Keep Alive: Not Supported 00:07:41.967 00:07:41.967 NVM Command Set Attributes 00:07:41.967 ========================== 00:07:41.967 Submission Queue Entry Size 00:07:41.967 Max: 64 00:07:41.967 Min: 64 00:07:41.967 Completion Queue Entry Size 00:07:41.967 Max: 16 00:07:41.967 Min: 16 00:07:41.967 Number of Namespaces: 256 00:07:41.967 Compare Command: Supported 00:07:41.967 Write Uncorrectable Command: Not Supported 00:07:41.967 Dataset Management Command: Supported 00:07:41.967 Write Zeroes Command: Supported 00:07:41.967 Set Features Save Field: Supported 00:07:41.967 Reservations: Not Supported 00:07:41.967 Timestamp: Supported 00:07:41.967 Copy: Supported 00:07:41.967 Volatile Write Cache: Present 00:07:41.967 Atomic Write Unit (Normal): 1 00:07:41.967 Atomic Write Unit (PFail): 1 00:07:41.967 Atomic Compare & Write Unit: 1 00:07:41.967 Fused Compare & Write: Not Supported 00:07:41.967 Scatter-Gather List 00:07:41.967 SGL Command Set: Supported 00:07:41.967 SGL Keyed: Not Supported 00:07:41.967 SGL Bit Bucket Descriptor: Not Supported 00:07:41.967 SGL Metadata Pointer: Not Supported 00:07:41.967 Oversized SGL: Not Supported 00:07:41.967 SGL Metadata Address: Not Supported 00:07:41.967 SGL Offset: Not Supported 00:07:41.967 Transport SGL Data Block: Not Supported 00:07:41.967 Replay Protected Memory Block: Not Supported 00:07:41.967 00:07:41.967 Firmware Slot Information 00:07:41.967 ========================= 00:07:41.967 Active slot: 1 00:07:41.967 Slot 1 Firmware Revision: 1.0 00:07:41.967 00:07:41.967 00:07:41.967 Commands Supported and Effects 00:07:41.967 ============================== 00:07:41.967 Admin Commands 00:07:41.967 -------------- 00:07:41.967 Delete I/O Submission Queue (00h): Supported 00:07:41.967 Create I/O Submission Queue (01h): Supported 00:07:41.967 Get Log Page (02h): Supported 00:07:41.967 Delete I/O Completion Queue (04h): Supported 00:07:41.967 Create I/O Completion Queue (05h): Supported 00:07:41.967 Identify (06h): Supported 00:07:41.967 Abort (08h): Supported 00:07:41.967 Set Features (09h): Supported 00:07:41.967 Get Features (0Ah): Supported 00:07:41.967 Asynchronous Event Request (0Ch): Supported 00:07:41.967 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:41.967 Directive Send (19h): Supported 00:07:41.967 Directive Receive (1Ah): Supported 00:07:41.967 Virtualization Management (1Ch): Supported 00:07:41.967 Doorbell Buffer Config (7Ch): Supported 00:07:41.967 Format NVM (80h): Supported LBA-Change 00:07:41.967 I/O Commands 00:07:41.967 ------------ 00:07:41.967 Flush (00h): Supported LBA-Change 00:07:41.967 Write (01h): Supported LBA-Change 00:07:41.967 Read (02h): Supported 00:07:41.967 Compare (05h): Supported 00:07:41.967 Write Zeroes (08h): Supported LBA-Change 00:07:41.967 Dataset Management (09h): Supported LBA-Change 00:07:41.967 Unknown (0Ch): Supported 00:07:41.967 Unknown (12h): Supported 00:07:41.967 Copy (19h): Supported LBA-Change 00:07:41.967 Unknown (1Dh): Supported LBA-Change 00:07:41.967 00:07:41.967 Error Log 00:07:41.967 ========= 00:07:41.967 00:07:41.967 Arbitration 00:07:41.967 =========== 
00:07:41.967 Arbitration Burst: no limit 00:07:41.967 00:07:41.967 Power Management 00:07:41.967 ================ 00:07:41.967 Number of Power States: 1 00:07:41.967 Current Power State: Power State #0 00:07:41.967 Power State #0: 00:07:41.967 Max Power: 25.00 W 00:07:41.967 Non-Operational State: Operational 00:07:41.967 Entry Latency: 16 microseconds 00:07:41.967 Exit Latency: 4 microseconds 00:07:41.967 Relative Read Throughput: 0 00:07:41.967 Relative Read Latency: 0 00:07:41.967 Relative Write Throughput: 0 00:07:41.967 Relative Write Latency: 0 00:07:41.967 Idle Power: Not Reported 00:07:41.967 Active Power: Not Reported 00:07:41.967 Non-Operational Permissive Mode: Not Supported 00:07:41.967 00:07:41.967 Health Information 00:07:41.967 ================== 00:07:41.967 Critical Warnings: 00:07:41.967 Available Spare Space: OK 00:07:41.967 Temperature: OK 00:07:41.967 Device Reliability: OK 00:07:41.967 Read Only: No 00:07:41.967 Volatile Memory Backup: OK 00:07:41.967 Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.967 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:41.967 Available Spare: 0% 00:07:41.967 Available Spare Threshold: 0% 00:07:41.967 Life Percentage Used: 0% 00:07:41.967 Data Units Read: 965 00:07:41.967 Data Units Written: 894 00:07:41.967 Host Read Commands: 41936 00:07:41.967 Host Write Commands: 41359 00:07:41.967 Controller Busy Time: 0 minutes 00:07:41.967 Power Cycles: 0 00:07:41.967 Power On Hours: 0 hours 00:07:41.967 Unsafe Shutdowns: 0 00:07:41.967 Unrecoverable Media Errors: 0 00:07:41.967 Lifetime Error Log Entries: 0 00:07:41.967 Warning Temperature Time: 0 minutes 00:07:41.967 Critical Temperature Time: 0 minutes 00:07:41.967 00:07:41.967 Number of Queues 00:07:41.968 ================ 00:07:41.968 Number of I/O Submission Queues: 64 00:07:41.968 Number of I/O Completion Queues: 64 00:07:41.968 00:07:41.968 ZNS Specific Controller Data 00:07:41.968 ============================ 00:07:41.968 Zone Append Size Limit: 0 00:07:41.968 00:07:41.968 00:07:41.968 Active Namespaces 00:07:41.968 ================= 00:07:41.968 Namespace ID:1 00:07:41.968 Error Recovery Timeout: Unlimited 00:07:41.968 Command Set Identifier: NVM (00h) 00:07:41.968 Deallocate: Supported 00:07:41.968 Deallocated/Unwritten Error: Supported 00:07:41.968 Deallocated Read Value: All 0x00 00:07:41.968 Deallocate in Write Zeroes: Not Supported 00:07:41.968 Deallocated Guard Field: 0xFFFF 00:07:41.968 Flush: Supported 00:07:41.968 Reservation: Not Supported 00:07:41.968 Namespace Sharing Capabilities: Multiple Controllers 00:07:41.968 Size (in LBAs): 262144 (1GiB) 00:07:41.968 Capacity (in LBAs): 262144 (1GiB) 00:07:41.968 Utilization (in LBAs): 262144 (1GiB) 00:07:41.968 Thin Provisioning: Not Supported 00:07:41.968 Per-NS Atomic Units: No 00:07:41.968 Maximum Single Source Range Length: 128 00:07:41.968 Maximum Copy Length: 128 00:07:41.968 Maximum Source Range Count: 128 00:07:41.968 NGUID/EUI64 Never Reused: No 00:07:41.968 Namespace Write Protected: No 00:07:41.968 Endurance group ID: 1 00:07:41.968 Number of LBA Formats: 8 00:07:41.968 Current LBA Format: LBA Format #04 00:07:41.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:41.968 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.968 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.968 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.968 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.968 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.968 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:41.968 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.968 00:07:41.968 Get Feature FDP: 00:07:41.968 ================ 00:07:41.968 Enabled: Yes 00:07:41.968 FDP configuration index: 0 00:07:41.968 00:07:41.968 FDP configurations log page 00:07:41.968 =========================== 00:07:41.968 Number of FDP configurations: 1 00:07:41.968 Version: 0 00:07:41.968 Size: 112 00:07:41.968 FDP Configuration Descriptor: 0 00:07:41.968 Descriptor Size: 96 00:07:41.968 Reclaim Group Identifier format: 2 00:07:41.968 FDP Volatile Write Cache: Not Present 00:07:41.968 FDP Configuration: Valid 00:07:41.968 Vendor Specific Size: 0 00:07:41.968 Number of Reclaim Groups: 2 00:07:41.968 Number of Reclaim Unit Handles: 8 00:07:41.968 Max Placement Identifiers: 128 00:07:41.968 Number of Namespaces Supported: 256 00:07:41.968 Reclaim Unit Nominal Size: 6000000 bytes 00:07:41.968 Estimated Reclaim Unit Time Limit: Not Reported 00:07:41.968 RUH Desc #000: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #001: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #002: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #003: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #004: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #005: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #006: RUH Type: Initially Isolated 00:07:41.968 RUH Desc #007: RUH Type: Initially Isolated 00:07:41.968 00:07:41.968 FDP reclaim unit handle usage log page 00:07:41.968 ====================================== 00:07:41.968 Number of Reclaim Unit Handles: 8 00:07:41.968 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:41.968 RUH Usage Desc #001: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #002: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #003: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #004: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #005: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #006: RUH Attributes: Unused 00:07:41.968 RUH Usage Desc #007: RUH Attributes: Unused 00:07:41.968 00:07:41.968 FDP statistics log page 00:07:41.968 ======================= 00:07:41.968 Host bytes with metadata written: 556572672 00:07:41.968 Media bytes with metadata written: 556781568 00:07:41.968 Media bytes erased: 0 00:07:41.968 00:07:41.968 FDP events log page 00:07:41.968 =================== 00:07:41.968 Number of FDP events: 0 00:07:41.968 00:07:41.968 NVM Specific Namespace Data 00:07:41.968 =========================== 00:07:41.968 Logical Block Storage Tag Mask: 0 00:07:41.968 Protection Information Capabilities: 00:07:41.968 16b Guard Protection Information Storage Tag Support: No 00:07:41.968 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.968 Storage Tag Check Read Support: No 00:07:41.968 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.968 00:07:41.968 real 0m1.242s 00:07:41.968 user 0m0.439s 00:07:41.968 sys 0m0.574s 00:07:41.968 23:51:48 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.968 23:51:48 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:41.968 ************************************ 00:07:41.968 END TEST nvme_identify 00:07:41.968 ************************************ 00:07:41.968 23:51:48 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:41.968 23:51:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.968 23:51:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.968 23:51:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.968 ************************************ 00:07:41.968 START TEST nvme_perf 00:07:41.968 ************************************ 00:07:41.968 23:51:48 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:41.968 23:51:48 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:43.345 Initializing NVMe Controllers 00:07:43.345 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:43.345 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:43.345 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:43.345 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:43.345 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:43.345 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:43.345 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:43.345 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:43.345 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:43.345 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:43.345 Initialization complete. Launching workers. 
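For reference, the perf run launched above can be replayed by hand against the same QEMU-emulated controllers. A minimal sketch, assuming the SPDK build tree at the path shown in the log and root access to the bound PCIe devices; the flag glosses in the comments are the commonly documented spdk_nvme_perf meanings rather than something stated in this log, and -LL, -i 0, and -N are left exactly as invoked:

# Sketch: repeat the 1-second read sweep outside the test harness.
#   -q 128    queue depth per namespace
#   -w read   sequential-read workload
#   -o 12288  I/O size in bytes (12 KiB, i.e. three 4 KiB blocks at the
#             current LBA format #04)
#   -t 1      run time in seconds
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N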
00:07:43.345 ======================================================== 00:07:43.345 Latency(us) 00:07:43.345 Device Information : IOPS MiB/s Average min max 00:07:43.345 PCIE (0000:00:10.0) NSID 1 from core 0: 19542.51 229.01 6558.45 5622.79 27497.12 00:07:43.345 PCIE (0000:00:11.0) NSID 1 from core 0: 19542.51 229.01 6549.50 5706.45 26051.90 00:07:43.345 PCIE (0000:00:13.0) NSID 1 from core 0: 19542.51 229.01 6539.45 5719.38 25294.36 00:07:43.345 PCIE (0000:00:12.0) NSID 1 from core 0: 19542.51 229.01 6528.87 5736.56 23910.76 00:07:43.345 PCIE (0000:00:12.0) NSID 2 from core 0: 19542.51 229.01 6518.65 5726.92 22684.31 00:07:43.345 PCIE (0000:00:12.0) NSID 3 from core 0: 19542.51 229.01 6508.46 5728.62 21922.04 00:07:43.345 ======================================================== 00:07:43.345 Total : 117255.07 1374.08 6533.90 5622.79 27497.12 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5747.003us 00:07:43.345 10.00000% : 5898.240us 00:07:43.345 25.00000% : 6074.683us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6654.425us 00:07:43.345 90.00000% : 6805.662us 00:07:43.345 95.00000% : 7360.197us 00:07:43.345 98.00000% : 9578.338us 00:07:43.345 99.00000% : 12098.954us 00:07:43.345 99.50000% : 20669.046us 00:07:43.345 99.90000% : 27020.997us 00:07:43.345 99.99000% : 27625.945us 00:07:43.345 99.99900% : 27625.945us 00:07:43.345 99.99990% : 27625.945us 00:07:43.345 99.99999% : 27625.945us 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5822.622us 00:07:43.345 10.00000% : 5973.858us 00:07:43.345 25.00000% : 6125.095us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6604.012us 00:07:43.345 90.00000% : 6755.249us 00:07:43.345 95.00000% : 7309.785us 00:07:43.345 98.00000% : 9628.751us 00:07:43.345 99.00000% : 11897.305us 00:07:43.345 99.50000% : 19358.326us 00:07:43.345 99.90000% : 25609.452us 00:07:43.345 99.99000% : 26214.400us 00:07:43.345 99.99900% : 26214.400us 00:07:43.345 99.99990% : 26214.400us 00:07:43.345 99.99999% : 26214.400us 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5847.828us 00:07:43.345 10.00000% : 5973.858us 00:07:43.345 25.00000% : 6125.095us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6604.012us 00:07:43.345 90.00000% : 6755.249us 00:07:43.345 95.00000% : 7309.785us 00:07:43.345 98.00000% : 9880.812us 00:07:43.345 99.00000% : 11947.717us 00:07:43.345 99.50000% : 18047.606us 00:07:43.345 99.90000% : 24802.855us 00:07:43.345 99.99000% : 25306.978us 00:07:43.345 99.99900% : 25306.978us 00:07:43.345 99.99990% : 25306.978us 00:07:43.345 99.99999% : 25306.978us 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5847.828us 00:07:43.345 10.00000% : 5973.858us 00:07:43.345 25.00000% : 6125.095us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6604.012us 00:07:43.345 90.00000% : 6755.249us 00:07:43.345 95.00000% : 7309.785us 00:07:43.345 98.00000% : 9931.225us 00:07:43.345 99.00000% : 
11544.418us 00:07:43.345 99.50000% : 16232.763us 00:07:43.345 99.90000% : 23290.486us 00:07:43.345 99.99000% : 23895.434us 00:07:43.345 99.99900% : 23996.258us 00:07:43.345 99.99990% : 23996.258us 00:07:43.345 99.99999% : 23996.258us 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5847.828us 00:07:43.345 10.00000% : 5973.858us 00:07:43.345 25.00000% : 6125.095us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6604.012us 00:07:43.345 90.00000% : 6755.249us 00:07:43.345 95.00000% : 7309.785us 00:07:43.345 98.00000% : 9880.812us 00:07:43.345 99.00000% : 10989.883us 00:07:43.345 99.50000% : 14518.745us 00:07:43.345 99.90000% : 22080.591us 00:07:43.345 99.99000% : 22685.538us 00:07:43.345 99.99900% : 22685.538us 00:07:43.345 99.99990% : 22685.538us 00:07:43.345 99.99999% : 22685.538us 00:07:43.345 00:07:43.345 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:43.345 ================================================================================= 00:07:43.345 1.00000% : 5847.828us 00:07:43.345 10.00000% : 5973.858us 00:07:43.345 25.00000% : 6125.095us 00:07:43.345 50.00000% : 6351.951us 00:07:43.345 75.00000% : 6604.012us 00:07:43.345 90.00000% : 6755.249us 00:07:43.345 95.00000% : 7309.785us 00:07:43.345 98.00000% : 9628.751us 00:07:43.345 99.00000% : 10435.348us 00:07:43.345 99.50000% : 13308.849us 00:07:43.345 99.90000% : 21374.818us 00:07:43.345 99.99000% : 21979.766us 00:07:43.345 99.99900% : 21979.766us 00:07:43.345 99.99990% : 21979.766us 00:07:43.345 99.99999% : 21979.766us 00:07:43.345 00:07:43.345 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:43.345 ============================================================================== 00:07:43.345 Range in us Cumulative IO count 00:07:43.345 5620.972 - 5646.178: 0.0153% ( 3) 00:07:43.345 5646.178 - 5671.385: 0.0919% ( 15) 00:07:43.345 5671.385 - 5696.591: 0.2145% ( 24) 00:07:43.345 5696.591 - 5721.797: 0.5668% ( 69) 00:07:43.345 5721.797 - 5747.003: 1.0927% ( 103) 00:07:43.345 5747.003 - 5772.209: 2.0323% ( 184) 00:07:43.345 5772.209 - 5797.415: 3.2629% ( 241) 00:07:43.345 5797.415 - 5822.622: 4.6364% ( 269) 00:07:43.345 5822.622 - 5847.828: 6.2653% ( 319) 00:07:43.345 5847.828 - 5873.034: 8.3691% ( 412) 00:07:43.345 5873.034 - 5898.240: 10.4882% ( 415) 00:07:43.345 5898.240 - 5923.446: 12.6123% ( 416) 00:07:43.345 5923.446 - 5948.652: 14.8131% ( 431) 00:07:43.345 5948.652 - 5973.858: 16.8811% ( 405) 00:07:43.345 5973.858 - 5999.065: 18.9645% ( 408) 00:07:43.345 5999.065 - 6024.271: 21.3746% ( 472) 00:07:43.345 6024.271 - 6049.477: 23.4528% ( 407) 00:07:43.345 6049.477 - 6074.683: 25.7812% ( 456) 00:07:43.345 6074.683 - 6099.889: 27.9361% ( 422) 00:07:43.345 6099.889 - 6125.095: 30.3054% ( 464) 00:07:43.345 6125.095 - 6150.302: 32.4704% ( 424) 00:07:43.345 6150.302 - 6175.508: 34.8448% ( 465) 00:07:43.345 6175.508 - 6200.714: 37.0404% ( 430) 00:07:43.345 6200.714 - 6225.920: 39.4404% ( 470) 00:07:43.345 6225.920 - 6251.126: 41.7535% ( 453) 00:07:43.345 6251.126 - 6276.332: 44.0768% ( 455) 00:07:43.345 6276.332 - 6301.538: 46.3184% ( 439) 00:07:43.345 6301.538 - 6326.745: 48.7898% ( 484) 00:07:43.345 6326.745 - 6351.951: 50.9906% ( 431) 00:07:43.345 6351.951 - 6377.157: 53.4058% ( 473) 00:07:43.345 6377.157 - 6402.363: 55.6117% ( 432) 00:07:43.345 6402.363 - 6427.569: 57.9197% ( 452) 00:07:43.345 6427.569 - 6452.775: 60.3196% ( 
470) 00:07:43.345 6452.775 - 6503.188: 65.0429% ( 925) 00:07:43.345 6503.188 - 6553.600: 69.7049% ( 913) 00:07:43.345 6553.600 - 6604.012: 74.4741% ( 934) 00:07:43.345 6604.012 - 6654.425: 79.1105% ( 908) 00:07:43.346 6654.425 - 6704.837: 83.5784% ( 875) 00:07:43.346 6704.837 - 6755.249: 87.3366% ( 736) 00:07:43.346 6755.249 - 6805.662: 90.2114% ( 563) 00:07:43.346 6805.662 - 6856.074: 92.0139% ( 353) 00:07:43.346 6856.074 - 6906.486: 92.8666% ( 167) 00:07:43.346 6906.486 - 6956.898: 93.4181% ( 108) 00:07:43.346 6956.898 - 7007.311: 93.7960% ( 74) 00:07:43.346 7007.311 - 7057.723: 94.0768% ( 55) 00:07:43.346 7057.723 - 7108.135: 94.3525% ( 54) 00:07:43.346 7108.135 - 7158.548: 94.5568% ( 40) 00:07:43.346 7158.548 - 7208.960: 94.7100% ( 30) 00:07:43.346 7208.960 - 7259.372: 94.8529% ( 28) 00:07:43.346 7259.372 - 7309.785: 94.9653% ( 22) 00:07:43.346 7309.785 - 7360.197: 95.0674% ( 20) 00:07:43.346 7360.197 - 7410.609: 95.1542% ( 17) 00:07:43.346 7410.609 - 7461.022: 95.2359% ( 16) 00:07:43.346 7461.022 - 7511.434: 95.3023% ( 13) 00:07:43.346 7511.434 - 7561.846: 95.3533% ( 10) 00:07:43.346 7561.846 - 7612.258: 95.3840% ( 6) 00:07:43.346 7612.258 - 7662.671: 95.4555% ( 14) 00:07:43.346 7662.671 - 7713.083: 95.5065% ( 10) 00:07:43.346 7713.083 - 7763.495: 95.5423% ( 7) 00:07:43.346 7763.495 - 7813.908: 95.5780% ( 7) 00:07:43.346 7813.908 - 7864.320: 95.6087% ( 6) 00:07:43.346 7864.320 - 7914.732: 95.6648% ( 11) 00:07:43.346 7914.732 - 7965.145: 95.7159% ( 10) 00:07:43.346 7965.145 - 8015.557: 95.7516% ( 7) 00:07:43.346 8015.557 - 8065.969: 95.8027% ( 10) 00:07:43.346 8065.969 - 8116.382: 95.8435% ( 8) 00:07:43.346 8116.382 - 8166.794: 95.8946% ( 10) 00:07:43.346 8166.794 - 8217.206: 95.9610% ( 13) 00:07:43.346 8217.206 - 8267.618: 96.0172% ( 11) 00:07:43.346 8267.618 - 8318.031: 96.0682% ( 10) 00:07:43.346 8318.031 - 8368.443: 96.1295% ( 12) 00:07:43.346 8368.443 - 8418.855: 96.1959% ( 13) 00:07:43.346 8418.855 - 8469.268: 96.2623% ( 13) 00:07:43.346 8469.268 - 8519.680: 96.3286% ( 13) 00:07:43.346 8519.680 - 8570.092: 96.4001% ( 14) 00:07:43.346 8570.092 - 8620.505: 96.4614% ( 12) 00:07:43.346 8620.505 - 8670.917: 96.5227% ( 12) 00:07:43.346 8670.917 - 8721.329: 96.5942% ( 14) 00:07:43.346 8721.329 - 8771.742: 96.6708% ( 15) 00:07:43.346 8771.742 - 8822.154: 96.7525% ( 16) 00:07:43.346 8822.154 - 8872.566: 96.8188% ( 13) 00:07:43.346 8872.566 - 8922.978: 96.8801% ( 12) 00:07:43.346 8922.978 - 8973.391: 96.9465% ( 13) 00:07:43.346 8973.391 - 9023.803: 97.0282% ( 16) 00:07:43.346 9023.803 - 9074.215: 97.1048% ( 15) 00:07:43.346 9074.215 - 9124.628: 97.1763% ( 14) 00:07:43.346 9124.628 - 9175.040: 97.2580% ( 16) 00:07:43.346 9175.040 - 9225.452: 97.3550% ( 19) 00:07:43.346 9225.452 - 9275.865: 97.4418% ( 17) 00:07:43.346 9275.865 - 9326.277: 97.5643% ( 24) 00:07:43.346 9326.277 - 9376.689: 97.6716% ( 21) 00:07:43.346 9376.689 - 9427.102: 97.7737% ( 20) 00:07:43.346 9427.102 - 9477.514: 97.8554% ( 16) 00:07:43.346 9477.514 - 9527.926: 97.9320% ( 15) 00:07:43.346 9527.926 - 9578.338: 98.0137% ( 16) 00:07:43.346 9578.338 - 9628.751: 98.0852% ( 14) 00:07:43.346 9628.751 - 9679.163: 98.1567% ( 14) 00:07:43.346 9679.163 - 9729.575: 98.2179% ( 12) 00:07:43.346 9729.575 - 9779.988: 98.2741% ( 11) 00:07:43.346 9779.988 - 9830.400: 98.3303% ( 11) 00:07:43.346 9830.400 - 9880.812: 98.3967% ( 13) 00:07:43.346 9880.812 - 9931.225: 98.4477% ( 10) 00:07:43.346 9931.225 - 9981.637: 98.4988% ( 10) 00:07:43.346 9981.637 - 10032.049: 98.5396% ( 8) 00:07:43.346 10032.049 - 10082.462: 98.5856% ( 9) 00:07:43.346 
10082.462 - 10132.874: 98.6264% ( 8) 00:07:43.346 10132.874 - 10183.286: 98.6571% ( 6) 00:07:43.346 10183.286 - 10233.698: 98.6673% ( 2) 00:07:43.346 10233.698 - 10284.111: 98.6928% ( 5) 00:07:43.346 10989.883 - 11040.295: 98.7030% ( 2) 00:07:43.346 11040.295 - 11090.708: 98.7081% ( 1) 00:07:43.346 11090.708 - 11141.120: 98.7183% ( 2) 00:07:43.346 11141.120 - 11191.532: 98.7337% ( 3) 00:07:43.346 11191.532 - 11241.945: 98.7439% ( 2) 00:07:43.346 11241.945 - 11292.357: 98.7541% ( 2) 00:07:43.346 11292.357 - 11342.769: 98.7694% ( 3) 00:07:43.346 11342.769 - 11393.182: 98.7898% ( 4) 00:07:43.346 11443.594 - 11494.006: 98.8051% ( 3) 00:07:43.346 11494.006 - 11544.418: 98.8154% ( 2) 00:07:43.346 11544.418 - 11594.831: 98.8256% ( 2) 00:07:43.346 11594.831 - 11645.243: 98.8358% ( 2) 00:07:43.346 11645.243 - 11695.655: 98.8409% ( 1) 00:07:43.346 11695.655 - 11746.068: 98.8511% ( 2) 00:07:43.346 11746.068 - 11796.480: 98.8766% ( 5) 00:07:43.346 11796.480 - 11846.892: 98.8971% ( 4) 00:07:43.346 11846.892 - 11897.305: 98.9277% ( 6) 00:07:43.346 11897.305 - 11947.717: 98.9430% ( 3) 00:07:43.346 11947.717 - 11998.129: 98.9685% ( 5) 00:07:43.346 11998.129 - 12048.542: 98.9941% ( 5) 00:07:43.346 12048.542 - 12098.954: 99.0145% ( 4) 00:07:43.346 12098.954 - 12149.366: 99.0400% ( 5) 00:07:43.346 12149.366 - 12199.778: 99.0554% ( 3) 00:07:43.346 12199.778 - 12250.191: 99.0758% ( 4) 00:07:43.346 12250.191 - 12300.603: 99.1013% ( 5) 00:07:43.346 12300.603 - 12351.015: 99.1319% ( 6) 00:07:43.346 12351.015 - 12401.428: 99.1371% ( 1) 00:07:43.346 12401.428 - 12451.840: 99.1626% ( 5) 00:07:43.346 12451.840 - 12502.252: 99.1728% ( 2) 00:07:43.346 12502.252 - 12552.665: 99.1881% ( 3) 00:07:43.346 12552.665 - 12603.077: 99.2034% ( 3) 00:07:43.346 12603.077 - 12653.489: 99.2188% ( 3) 00:07:43.346 12653.489 - 12703.902: 99.2290% ( 2) 00:07:43.346 12703.902 - 12754.314: 99.2392% ( 2) 00:07:43.346 12754.314 - 12804.726: 99.2545% ( 3) 00:07:43.346 12804.726 - 12855.138: 99.2647% ( 2) 00:07:43.346 12855.138 - 12905.551: 99.2749% ( 2) 00:07:43.346 12905.551 - 13006.375: 99.2953% ( 4) 00:07:43.346 13006.375 - 13107.200: 99.3209% ( 5) 00:07:43.346 13107.200 - 13208.025: 99.3413% ( 4) 00:07:43.346 13208.025 - 13308.849: 99.3464% ( 1) 00:07:43.346 19660.800 - 19761.625: 99.3515% ( 1) 00:07:43.346 19761.625 - 19862.449: 99.3566% ( 1) 00:07:43.346 19862.449 - 19963.274: 99.3770% ( 4) 00:07:43.346 19963.274 - 20064.098: 99.3924% ( 3) 00:07:43.346 20064.098 - 20164.923: 99.4077% ( 3) 00:07:43.346 20164.923 - 20265.748: 99.4281% ( 4) 00:07:43.346 20265.748 - 20366.572: 99.4485% ( 4) 00:07:43.346 20366.572 - 20467.397: 99.4741% ( 5) 00:07:43.346 20467.397 - 20568.222: 99.4894% ( 3) 00:07:43.346 20568.222 - 20669.046: 99.5098% ( 4) 00:07:43.346 20669.046 - 20769.871: 99.5353% ( 5) 00:07:43.346 20769.871 - 20870.695: 99.5558% ( 4) 00:07:43.346 20870.695 - 20971.520: 99.5711% ( 3) 00:07:43.346 20971.520 - 21072.345: 99.5915% ( 4) 00:07:43.346 21072.345 - 21173.169: 99.6119% ( 4) 00:07:43.346 21173.169 - 21273.994: 99.6324% ( 4) 00:07:43.346 21273.994 - 21374.818: 99.6528% ( 4) 00:07:43.346 21374.818 - 21475.643: 99.6732% ( 4) 00:07:43.346 25407.803 - 25508.628: 99.6834% ( 2) 00:07:43.346 25508.628 - 25609.452: 99.7038% ( 4) 00:07:43.346 25609.452 - 25710.277: 99.7192% ( 3) 00:07:43.346 25710.277 - 25811.102: 99.7345% ( 3) 00:07:43.346 25811.102 - 26012.751: 99.7651% ( 6) 00:07:43.346 26012.751 - 26214.400: 99.8009% ( 7) 00:07:43.346 26214.400 - 26416.049: 99.8315% ( 6) 00:07:43.346 26416.049 - 26617.698: 99.8621% ( 6) 00:07:43.346 
26617.698 - 26819.348: 99.8979% ( 7) 00:07:43.346 26819.348 - 27020.997: 99.9285% ( 6) 00:07:43.346 27020.997 - 27222.646: 99.9592% ( 6) 00:07:43.346 27222.646 - 27424.295: 99.9898% ( 6) 00:07:43.346 27424.295 - 27625.945: 100.0000% ( 2) 00:07:43.346 00:07:43.346 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:43.346 ============================================================================== 00:07:43.346 Range in us Cumulative IO count 00:07:43.346 5696.591 - 5721.797: 0.0204% ( 4) 00:07:43.346 5721.797 - 5747.003: 0.0562% ( 7) 00:07:43.346 5747.003 - 5772.209: 0.1583% ( 20) 00:07:43.346 5772.209 - 5797.415: 0.4596% ( 59) 00:07:43.346 5797.415 - 5822.622: 1.0468% ( 115) 00:07:43.346 5822.622 - 5847.828: 1.9148% ( 170) 00:07:43.346 5847.828 - 5873.034: 3.2067% ( 253) 00:07:43.346 5873.034 - 5898.240: 4.7743% ( 307) 00:07:43.346 5898.240 - 5923.446: 6.6023% ( 358) 00:07:43.346 5923.446 - 5948.652: 8.7214% ( 415) 00:07:43.346 5948.652 - 5973.858: 11.2081% ( 487) 00:07:43.346 5973.858 - 5999.065: 13.8072% ( 509) 00:07:43.346 5999.065 - 6024.271: 16.4573% ( 519) 00:07:43.346 6024.271 - 6049.477: 18.9747% ( 493) 00:07:43.346 6049.477 - 6074.683: 21.6759% ( 529) 00:07:43.346 6074.683 - 6099.889: 24.4077% ( 535) 00:07:43.346 6099.889 - 6125.095: 27.0782% ( 523) 00:07:43.346 6125.095 - 6150.302: 29.7539% ( 524) 00:07:43.346 6150.302 - 6175.508: 32.4908% ( 536) 00:07:43.346 6175.508 - 6200.714: 35.2277% ( 536) 00:07:43.346 6200.714 - 6225.920: 38.0413% ( 551) 00:07:43.346 6225.920 - 6251.126: 40.8292% ( 546) 00:07:43.346 6251.126 - 6276.332: 43.5917% ( 541) 00:07:43.346 6276.332 - 6301.538: 46.2827% ( 527) 00:07:43.346 6301.538 - 6326.745: 49.0707% ( 546) 00:07:43.346 6326.745 - 6351.951: 51.8893% ( 552) 00:07:43.346 6351.951 - 6377.157: 54.6671% ( 544) 00:07:43.346 6377.157 - 6402.363: 57.4806% ( 551) 00:07:43.346 6402.363 - 6427.569: 60.2328% ( 539) 00:07:43.346 6427.569 - 6452.775: 62.9953% ( 541) 00:07:43.346 6452.775 - 6503.188: 68.4998% ( 1078) 00:07:43.346 6503.188 - 6553.600: 74.0196% ( 1081) 00:07:43.346 6553.600 - 6604.012: 79.4526% ( 1064) 00:07:43.346 6604.012 - 6654.425: 84.4669% ( 982) 00:07:43.346 6654.425 - 6704.837: 88.2812% ( 747) 00:07:43.346 6704.837 - 6755.249: 90.8905% ( 511) 00:07:43.347 6755.249 - 6805.662: 92.1824% ( 253) 00:07:43.347 6805.662 - 6856.074: 92.9126% ( 143) 00:07:43.347 6856.074 - 6906.486: 93.4130% ( 98) 00:07:43.347 6906.486 - 6956.898: 93.7092% ( 58) 00:07:43.347 6956.898 - 7007.311: 93.9951% ( 56) 00:07:43.347 7007.311 - 7057.723: 94.2555% ( 51) 00:07:43.347 7057.723 - 7108.135: 94.4649% ( 41) 00:07:43.347 7108.135 - 7158.548: 94.6487% ( 36) 00:07:43.347 7158.548 - 7208.960: 94.7917% ( 28) 00:07:43.347 7208.960 - 7259.372: 94.8989% ( 21) 00:07:43.347 7259.372 - 7309.785: 95.0061% ( 21) 00:07:43.347 7309.785 - 7360.197: 95.0725% ( 13) 00:07:43.347 7360.197 - 7410.609: 95.1287% ( 11) 00:07:43.347 7410.609 - 7461.022: 95.1695% ( 8) 00:07:43.347 7461.022 - 7511.434: 95.2104% ( 8) 00:07:43.347 7511.434 - 7561.846: 95.2665% ( 11) 00:07:43.347 7561.846 - 7612.258: 95.3227% ( 11) 00:07:43.347 7612.258 - 7662.671: 95.3687% ( 9) 00:07:43.347 7662.671 - 7713.083: 95.4146% ( 9) 00:07:43.347 7713.083 - 7763.495: 95.4861% ( 14) 00:07:43.347 7763.495 - 7813.908: 95.5474% ( 12) 00:07:43.347 7813.908 - 7864.320: 95.6189% ( 14) 00:07:43.347 7864.320 - 7914.732: 95.6750% ( 11) 00:07:43.347 7914.732 - 7965.145: 95.7567% ( 16) 00:07:43.347 7965.145 - 8015.557: 95.8282% ( 14) 00:07:43.347 8015.557 - 8065.969: 95.9099% ( 16) 00:07:43.347 8065.969 - 
8116.382: 95.9916% ( 16) 00:07:43.347 8116.382 - 8166.794: 96.0580% ( 13) 00:07:43.347 8166.794 - 8217.206: 96.1244% ( 13) 00:07:43.347 8217.206 - 8267.618: 96.1806% ( 11) 00:07:43.347 8267.618 - 8318.031: 96.2469% ( 13) 00:07:43.347 8318.031 - 8368.443: 96.3133% ( 13) 00:07:43.347 8368.443 - 8418.855: 96.3746% ( 12) 00:07:43.347 8418.855 - 8469.268: 96.4205% ( 9) 00:07:43.347 8469.268 - 8519.680: 96.4767% ( 11) 00:07:43.347 8519.680 - 8570.092: 96.5380% ( 12) 00:07:43.347 8570.092 - 8620.505: 96.6095% ( 14) 00:07:43.347 8620.505 - 8670.917: 96.6810% ( 14) 00:07:43.347 8670.917 - 8721.329: 96.7576% ( 15) 00:07:43.347 8721.329 - 8771.742: 96.8393% ( 16) 00:07:43.347 8771.742 - 8822.154: 96.9107% ( 14) 00:07:43.347 8822.154 - 8872.566: 96.9771% ( 13) 00:07:43.347 8872.566 - 8922.978: 97.0486% ( 14) 00:07:43.347 8922.978 - 8973.391: 97.1201% ( 14) 00:07:43.347 8973.391 - 9023.803: 97.1916% ( 14) 00:07:43.347 9023.803 - 9074.215: 97.2682% ( 15) 00:07:43.347 9074.215 - 9124.628: 97.3448% ( 15) 00:07:43.347 9124.628 - 9175.040: 97.4265% ( 16) 00:07:43.347 9175.040 - 9225.452: 97.5031% ( 15) 00:07:43.347 9225.452 - 9275.865: 97.5848% ( 16) 00:07:43.347 9275.865 - 9326.277: 97.6614% ( 15) 00:07:43.347 9326.277 - 9376.689: 97.7277% ( 13) 00:07:43.347 9376.689 - 9427.102: 97.7839% ( 11) 00:07:43.347 9427.102 - 9477.514: 97.8401% ( 11) 00:07:43.347 9477.514 - 9527.926: 97.9116% ( 14) 00:07:43.347 9527.926 - 9578.338: 97.9677% ( 11) 00:07:43.347 9578.338 - 9628.751: 98.0137% ( 9) 00:07:43.347 9628.751 - 9679.163: 98.0494% ( 7) 00:07:43.347 9679.163 - 9729.575: 98.0903% ( 8) 00:07:43.347 9729.575 - 9779.988: 98.1260% ( 7) 00:07:43.347 9779.988 - 9830.400: 98.1771% ( 10) 00:07:43.347 9830.400 - 9880.812: 98.2128% ( 7) 00:07:43.347 9880.812 - 9931.225: 98.2486% ( 7) 00:07:43.347 9931.225 - 9981.637: 98.2843% ( 7) 00:07:43.347 9981.637 - 10032.049: 98.3150% ( 6) 00:07:43.347 10032.049 - 10082.462: 98.3558% ( 8) 00:07:43.347 10082.462 - 10132.874: 98.3967% ( 8) 00:07:43.347 10132.874 - 10183.286: 98.4375% ( 8) 00:07:43.347 10183.286 - 10233.698: 98.4783% ( 8) 00:07:43.347 10233.698 - 10284.111: 98.5192% ( 8) 00:07:43.347 10284.111 - 10334.523: 98.5549% ( 7) 00:07:43.347 10334.523 - 10384.935: 98.5754% ( 4) 00:07:43.347 10384.935 - 10435.348: 98.5958% ( 4) 00:07:43.347 10435.348 - 10485.760: 98.6162% ( 4) 00:07:43.347 10485.760 - 10536.172: 98.6315% ( 3) 00:07:43.347 10536.172 - 10586.585: 98.6520% ( 4) 00:07:43.347 10586.585 - 10636.997: 98.6724% ( 4) 00:07:43.347 10636.997 - 10687.409: 98.6928% ( 4) 00:07:43.347 11090.708 - 11141.120: 98.6979% ( 1) 00:07:43.347 11141.120 - 11191.532: 98.7081% ( 2) 00:07:43.347 11191.532 - 11241.945: 98.7183% ( 2) 00:07:43.347 11241.945 - 11292.357: 98.7337% ( 3) 00:07:43.347 11292.357 - 11342.769: 98.7439% ( 2) 00:07:43.347 11342.769 - 11393.182: 98.7643% ( 4) 00:07:43.347 11393.182 - 11443.594: 98.7796% ( 3) 00:07:43.347 11443.594 - 11494.006: 98.8000% ( 4) 00:07:43.347 11494.006 - 11544.418: 98.8256% ( 5) 00:07:43.347 11544.418 - 11594.831: 98.8562% ( 6) 00:07:43.347 11594.831 - 11645.243: 98.8817% ( 5) 00:07:43.347 11645.243 - 11695.655: 98.9124% ( 6) 00:07:43.347 11695.655 - 11746.068: 98.9379% ( 5) 00:07:43.347 11746.068 - 11796.480: 98.9634% ( 5) 00:07:43.347 11796.480 - 11846.892: 98.9890% ( 5) 00:07:43.347 11846.892 - 11897.305: 99.0145% ( 5) 00:07:43.347 11897.305 - 11947.717: 99.0400% ( 5) 00:07:43.347 11947.717 - 11998.129: 99.0707% ( 6) 00:07:43.347 11998.129 - 12048.542: 99.0962% ( 5) 00:07:43.347 12048.542 - 12098.954: 99.1268% ( 6) 00:07:43.347 12098.954 - 
12149.366: 99.1473% ( 4) 00:07:43.347 12149.366 - 12199.778: 99.1779% ( 6) 00:07:43.347 12199.778 - 12250.191: 99.2034% ( 5) 00:07:43.347 12250.191 - 12300.603: 99.2290% ( 5) 00:07:43.347 12300.603 - 12351.015: 99.2596% ( 6) 00:07:43.347 12351.015 - 12401.428: 99.2698% ( 2) 00:07:43.347 12401.428 - 12451.840: 99.2851% ( 3) 00:07:43.347 12451.840 - 12502.252: 99.2953% ( 2) 00:07:43.347 12502.252 - 12552.665: 99.3107% ( 3) 00:07:43.347 12552.665 - 12603.077: 99.3260% ( 3) 00:07:43.347 12603.077 - 12653.489: 99.3413% ( 3) 00:07:43.347 12653.489 - 12703.902: 99.3464% ( 1) 00:07:43.347 18450.905 - 18551.729: 99.3515% ( 1) 00:07:43.347 18551.729 - 18652.554: 99.3719% ( 4) 00:07:43.347 18652.554 - 18753.378: 99.3924% ( 4) 00:07:43.347 18753.378 - 18854.203: 99.4128% ( 4) 00:07:43.347 18854.203 - 18955.028: 99.4332% ( 4) 00:07:43.347 18955.028 - 19055.852: 99.4587% ( 5) 00:07:43.347 19055.852 - 19156.677: 99.4741% ( 3) 00:07:43.347 19156.677 - 19257.502: 99.4996% ( 5) 00:07:43.347 19257.502 - 19358.326: 99.5200% ( 4) 00:07:43.347 19358.326 - 19459.151: 99.5404% ( 4) 00:07:43.347 19459.151 - 19559.975: 99.5609% ( 4) 00:07:43.347 19559.975 - 19660.800: 99.5813% ( 4) 00:07:43.347 19660.800 - 19761.625: 99.6068% ( 5) 00:07:43.347 19761.625 - 19862.449: 99.6272% ( 4) 00:07:43.347 19862.449 - 19963.274: 99.6426% ( 3) 00:07:43.347 19963.274 - 20064.098: 99.6681% ( 5) 00:07:43.347 20064.098 - 20164.923: 99.6732% ( 1) 00:07:43.347 24097.083 - 24197.908: 99.6783% ( 1) 00:07:43.347 24197.908 - 24298.732: 99.6936% ( 3) 00:07:43.347 24298.732 - 24399.557: 99.7141% ( 4) 00:07:43.347 24399.557 - 24500.382: 99.7294% ( 3) 00:07:43.347 24500.382 - 24601.206: 99.7498% ( 4) 00:07:43.347 24601.206 - 24702.031: 99.7600% ( 2) 00:07:43.347 24702.031 - 24802.855: 99.7753% ( 3) 00:07:43.347 24802.855 - 24903.680: 99.7958% ( 4) 00:07:43.347 24903.680 - 25004.505: 99.8111% ( 3) 00:07:43.347 25004.505 - 25105.329: 99.8315% ( 4) 00:07:43.347 25105.329 - 25206.154: 99.8468% ( 3) 00:07:43.347 25206.154 - 25306.978: 99.8672% ( 4) 00:07:43.347 25306.978 - 25407.803: 99.8826% ( 3) 00:07:43.347 25407.803 - 25508.628: 99.8979% ( 3) 00:07:43.347 25508.628 - 25609.452: 99.9183% ( 4) 00:07:43.347 25609.452 - 25710.277: 99.9336% ( 3) 00:07:43.347 25710.277 - 25811.102: 99.9540% ( 4) 00:07:43.347 25811.102 - 26012.751: 99.9898% ( 7) 00:07:43.347 26012.751 - 26214.400: 100.0000% ( 2) 00:07:43.347 00:07:43.347 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:43.347 ============================================================================== 00:07:43.347 Range in us Cumulative IO count 00:07:43.347 5696.591 - 5721.797: 0.0051% ( 1) 00:07:43.347 5721.797 - 5747.003: 0.0562% ( 10) 00:07:43.347 5747.003 - 5772.209: 0.1328% ( 15) 00:07:43.347 5772.209 - 5797.415: 0.3268% ( 38) 00:07:43.347 5797.415 - 5822.622: 0.8272% ( 98) 00:07:43.347 5822.622 - 5847.828: 1.6340% ( 158) 00:07:43.347 5847.828 - 5873.034: 2.8646% ( 241) 00:07:43.347 5873.034 - 5898.240: 4.3505% ( 291) 00:07:43.347 5898.240 - 5923.446: 6.3011% ( 382) 00:07:43.347 5923.446 - 5948.652: 8.8491% ( 499) 00:07:43.347 5948.652 - 5973.858: 11.2490% ( 470) 00:07:43.347 5973.858 - 5999.065: 13.7204% ( 484) 00:07:43.347 5999.065 - 6024.271: 16.3552% ( 516) 00:07:43.347 6024.271 - 6049.477: 19.1023% ( 538) 00:07:43.347 6049.477 - 6074.683: 21.8342% ( 535) 00:07:43.347 6074.683 - 6099.889: 24.5609% ( 534) 00:07:43.347 6099.889 - 6125.095: 27.2110% ( 519) 00:07:43.347 6125.095 - 6150.302: 29.9683% ( 540) 00:07:43.347 6150.302 - 6175.508: 32.7614% ( 547) 00:07:43.347 
6175.508 - 6200.714: 35.5137% ( 539) 00:07:43.347 6200.714 - 6225.920: 38.3323% ( 552) 00:07:43.347 6225.920 - 6251.126: 41.1560% ( 553) 00:07:43.347 6251.126 - 6276.332: 43.9287% ( 543) 00:07:43.347 6276.332 - 6301.538: 46.7371% ( 550) 00:07:43.347 6301.538 - 6326.745: 49.5149% ( 544) 00:07:43.347 6326.745 - 6351.951: 52.3233% ( 550) 00:07:43.347 6351.951 - 6377.157: 55.1113% ( 546) 00:07:43.347 6377.157 - 6402.363: 57.8789% ( 542) 00:07:43.347 6402.363 - 6427.569: 60.6618% ( 545) 00:07:43.347 6427.569 - 6452.775: 63.4140% ( 539) 00:07:43.347 6452.775 - 6503.188: 68.9491% ( 1084) 00:07:43.347 6503.188 - 6553.600: 74.5149% ( 1090) 00:07:43.347 6553.600 - 6604.012: 79.8969% ( 1054) 00:07:43.347 6604.012 - 6654.425: 84.9009% ( 980) 00:07:43.347 6654.425 - 6704.837: 88.8021% ( 764) 00:07:43.347 6704.837 - 6755.249: 91.3041% ( 490) 00:07:43.347 6755.249 - 6805.662: 92.6317% ( 260) 00:07:43.348 6805.662 - 6856.074: 93.2700% ( 125) 00:07:43.348 6856.074 - 6906.486: 93.6275% ( 70) 00:07:43.348 6906.486 - 6956.898: 93.8828% ( 50) 00:07:43.348 6956.898 - 7007.311: 94.0921% ( 41) 00:07:43.348 7007.311 - 7057.723: 94.2810% ( 37) 00:07:43.348 7057.723 - 7108.135: 94.4547% ( 34) 00:07:43.348 7108.135 - 7158.548: 94.6538% ( 39) 00:07:43.348 7158.548 - 7208.960: 94.7917% ( 27) 00:07:43.348 7208.960 - 7259.372: 94.9295% ( 27) 00:07:43.348 7259.372 - 7309.785: 95.0470% ( 23) 00:07:43.348 7309.785 - 7360.197: 95.1491% ( 20) 00:07:43.348 7360.197 - 7410.609: 95.2512% ( 20) 00:07:43.348 7410.609 - 7461.022: 95.3227% ( 14) 00:07:43.348 7461.022 - 7511.434: 95.3840% ( 12) 00:07:43.348 7511.434 - 7561.846: 95.4453% ( 12) 00:07:43.348 7561.846 - 7612.258: 95.5065% ( 12) 00:07:43.348 7612.258 - 7662.671: 95.5729% ( 13) 00:07:43.348 7662.671 - 7713.083: 95.6597% ( 17) 00:07:43.348 7713.083 - 7763.495: 95.7465% ( 17) 00:07:43.348 7763.495 - 7813.908: 95.8282% ( 16) 00:07:43.348 7813.908 - 7864.320: 95.9099% ( 16) 00:07:43.348 7864.320 - 7914.732: 95.9610% ( 10) 00:07:43.348 7914.732 - 7965.145: 96.0223% ( 12) 00:07:43.348 7965.145 - 8015.557: 96.0631% ( 8) 00:07:43.348 8015.557 - 8065.969: 96.1244% ( 12) 00:07:43.348 8065.969 - 8116.382: 96.1652% ( 8) 00:07:43.348 8116.382 - 8166.794: 96.2214% ( 11) 00:07:43.348 8166.794 - 8217.206: 96.2878% ( 13) 00:07:43.348 8217.206 - 8267.618: 96.3491% ( 12) 00:07:43.348 8267.618 - 8318.031: 96.4052% ( 11) 00:07:43.348 8318.031 - 8368.443: 96.4614% ( 11) 00:07:43.348 8368.443 - 8418.855: 96.5176% ( 11) 00:07:43.348 8418.855 - 8469.268: 96.5839% ( 13) 00:07:43.348 8469.268 - 8519.680: 96.6708% ( 17) 00:07:43.348 8519.680 - 8570.092: 96.7780% ( 21) 00:07:43.348 8570.092 - 8620.505: 96.8597% ( 16) 00:07:43.348 8620.505 - 8670.917: 96.9567% ( 19) 00:07:43.348 8670.917 - 8721.329: 97.0180% ( 12) 00:07:43.348 8721.329 - 8771.742: 97.0997% ( 16) 00:07:43.348 8771.742 - 8822.154: 97.1456% ( 9) 00:07:43.348 8822.154 - 8872.566: 97.2273% ( 16) 00:07:43.348 8872.566 - 8922.978: 97.2988% ( 14) 00:07:43.348 8922.978 - 8973.391: 97.3397% ( 8) 00:07:43.348 8973.391 - 9023.803: 97.3703% ( 6) 00:07:43.348 9023.803 - 9074.215: 97.4060% ( 7) 00:07:43.348 9074.215 - 9124.628: 97.4418% ( 7) 00:07:43.348 9124.628 - 9175.040: 97.4826% ( 8) 00:07:43.348 9175.040 - 9225.452: 97.5235% ( 8) 00:07:43.348 9225.452 - 9275.865: 97.5694% ( 9) 00:07:43.348 9275.865 - 9326.277: 97.6256% ( 11) 00:07:43.348 9326.277 - 9376.689: 97.6818% ( 11) 00:07:43.348 9376.689 - 9427.102: 97.7277% ( 9) 00:07:43.348 9427.102 - 9477.514: 97.7584% ( 6) 00:07:43.348 9477.514 - 9527.926: 97.7839% ( 5) 00:07:43.348 9527.926 - 
9578.338: 97.8145% ( 6) 00:07:43.348 9578.338 - 9628.751: 97.8503% ( 7) 00:07:43.348 9628.751 - 9679.163: 97.8809% ( 6) 00:07:43.348 9679.163 - 9729.575: 97.8962% ( 3) 00:07:43.348 9729.575 - 9779.988: 97.9269% ( 6) 00:07:43.348 9779.988 - 9830.400: 97.9677% ( 8) 00:07:43.348 9830.400 - 9880.812: 98.0035% ( 7) 00:07:43.348 9880.812 - 9931.225: 98.0443% ( 8) 00:07:43.348 9931.225 - 9981.637: 98.0852% ( 8) 00:07:43.348 9981.637 - 10032.049: 98.1209% ( 7) 00:07:43.348 10032.049 - 10082.462: 98.1618% ( 8) 00:07:43.348 10082.462 - 10132.874: 98.1873% ( 5) 00:07:43.348 10132.874 - 10183.286: 98.2077% ( 4) 00:07:43.348 10183.286 - 10233.698: 98.2281% ( 4) 00:07:43.348 10233.698 - 10284.111: 98.2486% ( 4) 00:07:43.348 10284.111 - 10334.523: 98.2639% ( 3) 00:07:43.348 10334.523 - 10384.935: 98.2894% ( 5) 00:07:43.348 10384.935 - 10435.348: 98.3201% ( 6) 00:07:43.348 10435.348 - 10485.760: 98.3456% ( 5) 00:07:43.348 10485.760 - 10536.172: 98.3762% ( 6) 00:07:43.348 10536.172 - 10586.585: 98.4069% ( 6) 00:07:43.348 10586.585 - 10636.997: 98.4222% ( 3) 00:07:43.348 10636.997 - 10687.409: 98.4324% ( 2) 00:07:43.348 10687.409 - 10737.822: 98.4426% ( 2) 00:07:43.348 10737.822 - 10788.234: 98.4528% ( 2) 00:07:43.348 10788.234 - 10838.646: 98.4732% ( 4) 00:07:43.348 10838.646 - 10889.058: 98.5039% ( 6) 00:07:43.348 10889.058 - 10939.471: 98.5447% ( 8) 00:07:43.348 10939.471 - 10989.883: 98.5805% ( 7) 00:07:43.348 10989.883 - 11040.295: 98.6111% ( 6) 00:07:43.348 11040.295 - 11090.708: 98.6417% ( 6) 00:07:43.348 11090.708 - 11141.120: 98.6775% ( 7) 00:07:43.348 11141.120 - 11191.532: 98.7132% ( 7) 00:07:43.348 11191.532 - 11241.945: 98.7439% ( 6) 00:07:43.348 11241.945 - 11292.357: 98.7796% ( 7) 00:07:43.348 11292.357 - 11342.769: 98.8154% ( 7) 00:07:43.348 11342.769 - 11393.182: 98.8409% ( 5) 00:07:43.348 11393.182 - 11443.594: 98.8664% ( 5) 00:07:43.348 11443.594 - 11494.006: 98.8766% ( 2) 00:07:43.348 11494.006 - 11544.418: 98.8868% ( 2) 00:07:43.348 11544.418 - 11594.831: 98.9022% ( 3) 00:07:43.348 11594.831 - 11645.243: 98.9175% ( 3) 00:07:43.348 11645.243 - 11695.655: 98.9328% ( 3) 00:07:43.348 11695.655 - 11746.068: 98.9481% ( 3) 00:07:43.348 11746.068 - 11796.480: 98.9583% ( 2) 00:07:43.348 11796.480 - 11846.892: 98.9788% ( 4) 00:07:43.348 11846.892 - 11897.305: 98.9992% ( 4) 00:07:43.348 11897.305 - 11947.717: 99.0247% ( 5) 00:07:43.348 11947.717 - 11998.129: 99.0400% ( 3) 00:07:43.348 11998.129 - 12048.542: 99.0707% ( 6) 00:07:43.348 12048.542 - 12098.954: 99.0911% ( 4) 00:07:43.348 12098.954 - 12149.366: 99.1166% ( 5) 00:07:43.348 12149.366 - 12199.778: 99.1319% ( 3) 00:07:43.348 12199.778 - 12250.191: 99.1524% ( 4) 00:07:43.348 12250.191 - 12300.603: 99.1677% ( 3) 00:07:43.348 12300.603 - 12351.015: 99.1881% ( 4) 00:07:43.348 12351.015 - 12401.428: 99.2085% ( 4) 00:07:43.348 12401.428 - 12451.840: 99.2290% ( 4) 00:07:43.348 12451.840 - 12502.252: 99.2494% ( 4) 00:07:43.348 12502.252 - 12552.665: 99.2647% ( 3) 00:07:43.348 12552.665 - 12603.077: 99.2851% ( 4) 00:07:43.348 12603.077 - 12653.489: 99.3004% ( 3) 00:07:43.348 12653.489 - 12703.902: 99.3209% ( 4) 00:07:43.348 12703.902 - 12754.314: 99.3413% ( 4) 00:07:43.348 12754.314 - 12804.726: 99.3464% ( 1) 00:07:43.348 17140.185 - 17241.009: 99.3515% ( 1) 00:07:43.348 17241.009 - 17341.834: 99.3719% ( 4) 00:07:43.348 17341.834 - 17442.658: 99.3975% ( 5) 00:07:43.348 17442.658 - 17543.483: 99.4179% ( 4) 00:07:43.348 17543.483 - 17644.308: 99.4383% ( 4) 00:07:43.348 17644.308 - 17745.132: 99.4587% ( 4) 00:07:43.348 17745.132 - 17845.957: 99.4792% ( 
4) 00:07:43.348 17845.957 - 17946.782: 99.4996% ( 4) 00:07:43.348 17946.782 - 18047.606: 99.5200% ( 4) 00:07:43.348 18047.606 - 18148.431: 99.5404% ( 4) 00:07:43.348 18148.431 - 18249.255: 99.5609% ( 4) 00:07:43.348 18249.255 - 18350.080: 99.5813% ( 4) 00:07:43.348 18350.080 - 18450.905: 99.6017% ( 4) 00:07:43.348 18450.905 - 18551.729: 99.6221% ( 4) 00:07:43.348 18551.729 - 18652.554: 99.6426% ( 4) 00:07:43.348 18652.554 - 18753.378: 99.6630% ( 4) 00:07:43.348 18753.378 - 18854.203: 99.6732% ( 2) 00:07:43.348 23290.486 - 23391.311: 99.6783% ( 1) 00:07:43.348 23391.311 - 23492.135: 99.6987% ( 4) 00:07:43.348 23492.135 - 23592.960: 99.7141% ( 3) 00:07:43.348 23592.960 - 23693.785: 99.7243% ( 2) 00:07:43.348 23693.785 - 23794.609: 99.7447% ( 4) 00:07:43.348 23794.609 - 23895.434: 99.7600% ( 3) 00:07:43.348 23895.434 - 23996.258: 99.7753% ( 3) 00:07:43.348 23996.258 - 24097.083: 99.7958% ( 4) 00:07:43.348 24097.083 - 24197.908: 99.8111% ( 3) 00:07:43.348 24197.908 - 24298.732: 99.8264% ( 3) 00:07:43.348 24298.732 - 24399.557: 99.8417% ( 3) 00:07:43.348 24399.557 - 24500.382: 99.8570% ( 3) 00:07:43.348 24500.382 - 24601.206: 99.8775% ( 4) 00:07:43.348 24601.206 - 24702.031: 99.8928% ( 3) 00:07:43.348 24702.031 - 24802.855: 99.9132% ( 4) 00:07:43.348 24802.855 - 24903.680: 99.9285% ( 3) 00:07:43.348 24903.680 - 25004.505: 99.9438% ( 3) 00:07:43.348 25004.505 - 25105.329: 99.9643% ( 4) 00:07:43.348 25105.329 - 25206.154: 99.9796% ( 3) 00:07:43.348 25206.154 - 25306.978: 100.0000% ( 4) 00:07:43.348 00:07:43.348 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:43.348 ============================================================================== 00:07:43.348 Range in us Cumulative IO count 00:07:43.348 5721.797 - 5747.003: 0.0306% ( 6) 00:07:43.348 5747.003 - 5772.209: 0.1379% ( 21) 00:07:43.348 5772.209 - 5797.415: 0.3983% ( 51) 00:07:43.348 5797.415 - 5822.622: 0.9446% ( 107) 00:07:43.348 5822.622 - 5847.828: 1.7106% ( 150) 00:07:43.348 5847.828 - 5873.034: 2.8952% ( 232) 00:07:43.348 5873.034 - 5898.240: 4.5037% ( 315) 00:07:43.348 5898.240 - 5923.446: 6.4236% ( 376) 00:07:43.348 5923.446 - 5948.652: 8.5274% ( 412) 00:07:43.348 5948.652 - 5973.858: 11.0958% ( 503) 00:07:43.348 5973.858 - 5999.065: 13.6744% ( 505) 00:07:43.348 5999.065 - 6024.271: 16.3603% ( 526) 00:07:43.348 6024.271 - 6049.477: 19.1381% ( 544) 00:07:43.348 6049.477 - 6074.683: 21.9771% ( 556) 00:07:43.348 6074.683 - 6099.889: 24.7038% ( 534) 00:07:43.348 6099.889 - 6125.095: 27.4050% ( 529) 00:07:43.348 6125.095 - 6150.302: 30.1675% ( 541) 00:07:43.348 6150.302 - 6175.508: 32.8278% ( 521) 00:07:43.348 6175.508 - 6200.714: 35.5239% ( 528) 00:07:43.348 6200.714 - 6225.920: 38.2710% ( 538) 00:07:43.348 6225.920 - 6251.126: 41.1050% ( 555) 00:07:43.348 6251.126 - 6276.332: 43.8674% ( 541) 00:07:43.348 6276.332 - 6301.538: 46.6095% ( 537) 00:07:43.348 6301.538 - 6326.745: 49.3770% ( 542) 00:07:43.348 6326.745 - 6351.951: 52.1906% ( 551) 00:07:43.348 6351.951 - 6377.157: 54.9377% ( 538) 00:07:43.348 6377.157 - 6402.363: 57.6593% ( 533) 00:07:43.349 6402.363 - 6427.569: 60.4728% ( 551) 00:07:43.349 6427.569 - 6452.775: 63.2506% ( 544) 00:07:43.349 6452.775 - 6503.188: 68.7347% ( 1074) 00:07:43.349 6503.188 - 6553.600: 74.3158% ( 1093) 00:07:43.349 6553.600 - 6604.012: 79.7283% ( 1060) 00:07:43.349 6604.012 - 6654.425: 84.6865% ( 971) 00:07:43.349 6654.425 - 6704.837: 88.5825% ( 763) 00:07:43.349 6704.837 - 6755.249: 91.1509% ( 503) 00:07:43.349 6755.249 - 6805.662: 92.4837% ( 261) 00:07:43.349 6805.662 - 6856.074: 
93.1117% ( 123) 00:07:43.349 6856.074 - 6906.486: 93.4947% ( 75) 00:07:43.349 6906.486 - 6956.898: 93.8062% ( 61) 00:07:43.349 6956.898 - 7007.311: 94.0666% ( 51) 00:07:43.349 7007.311 - 7057.723: 94.3219% ( 50) 00:07:43.349 7057.723 - 7108.135: 94.5210% ( 39) 00:07:43.349 7108.135 - 7158.548: 94.7100% ( 37) 00:07:43.349 7158.548 - 7208.960: 94.8376% ( 25) 00:07:43.349 7208.960 - 7259.372: 94.9397% ( 20) 00:07:43.349 7259.372 - 7309.785: 95.0214% ( 16) 00:07:43.349 7309.785 - 7360.197: 95.0878% ( 13) 00:07:43.349 7360.197 - 7410.609: 95.1542% ( 13) 00:07:43.349 7410.609 - 7461.022: 95.2002% ( 9) 00:07:43.349 7461.022 - 7511.434: 95.2870% ( 17) 00:07:43.349 7511.434 - 7561.846: 95.3585% ( 14) 00:07:43.349 7561.846 - 7612.258: 95.4402% ( 16) 00:07:43.349 7612.258 - 7662.671: 95.5321% ( 18) 00:07:43.349 7662.671 - 7713.083: 95.6036% ( 14) 00:07:43.349 7713.083 - 7763.495: 95.6597% ( 11) 00:07:43.349 7763.495 - 7813.908: 95.7363% ( 15) 00:07:43.349 7813.908 - 7864.320: 95.8027% ( 13) 00:07:43.349 7864.320 - 7914.732: 95.8742% ( 14) 00:07:43.349 7914.732 - 7965.145: 95.9508% ( 15) 00:07:43.349 7965.145 - 8015.557: 96.0274% ( 15) 00:07:43.349 8015.557 - 8065.969: 96.0989% ( 14) 00:07:43.349 8065.969 - 8116.382: 96.1703% ( 14) 00:07:43.349 8116.382 - 8166.794: 96.2418% ( 14) 00:07:43.349 8166.794 - 8217.206: 96.3235% ( 16) 00:07:43.349 8217.206 - 8267.618: 96.3950% ( 14) 00:07:43.349 8267.618 - 8318.031: 96.4716% ( 15) 00:07:43.349 8318.031 - 8368.443: 96.5482% ( 15) 00:07:43.349 8368.443 - 8418.855: 96.6248% ( 15) 00:07:43.349 8418.855 - 8469.268: 96.6861% ( 12) 00:07:43.349 8469.268 - 8519.680: 96.7371% ( 10) 00:07:43.349 8519.680 - 8570.092: 96.7882% ( 10) 00:07:43.349 8570.092 - 8620.505: 96.8495% ( 12) 00:07:43.349 8620.505 - 8670.917: 96.9005% ( 10) 00:07:43.349 8670.917 - 8721.329: 96.9567% ( 11) 00:07:43.349 8721.329 - 8771.742: 97.0486% ( 18) 00:07:43.349 8771.742 - 8822.154: 97.1201% ( 14) 00:07:43.349 8822.154 - 8872.566: 97.1712% ( 10) 00:07:43.349 8872.566 - 8922.978: 97.2273% ( 11) 00:07:43.349 8922.978 - 8973.391: 97.2886% ( 12) 00:07:43.349 8973.391 - 9023.803: 97.3499% ( 12) 00:07:43.349 9023.803 - 9074.215: 97.3907% ( 8) 00:07:43.349 9074.215 - 9124.628: 97.4367% ( 9) 00:07:43.349 9124.628 - 9175.040: 97.4775% ( 8) 00:07:43.349 9175.040 - 9225.452: 97.5133% ( 7) 00:07:43.349 9225.452 - 9275.865: 97.5541% ( 8) 00:07:43.349 9275.865 - 9326.277: 97.5950% ( 8) 00:07:43.349 9326.277 - 9376.689: 97.6409% ( 9) 00:07:43.349 9376.689 - 9427.102: 97.6920% ( 10) 00:07:43.349 9427.102 - 9477.514: 97.7379% ( 9) 00:07:43.349 9477.514 - 9527.926: 97.7737% ( 7) 00:07:43.349 9527.926 - 9578.338: 97.7992% ( 5) 00:07:43.349 9578.338 - 9628.751: 97.8196% ( 4) 00:07:43.349 9628.751 - 9679.163: 97.8401% ( 4) 00:07:43.349 9679.163 - 9729.575: 97.8605% ( 4) 00:07:43.349 9729.575 - 9779.988: 97.8962% ( 7) 00:07:43.349 9779.988 - 9830.400: 97.9269% ( 6) 00:07:43.349 9830.400 - 9880.812: 97.9728% ( 9) 00:07:43.349 9880.812 - 9931.225: 98.0137% ( 8) 00:07:43.349 9931.225 - 9981.637: 98.0494% ( 7) 00:07:43.349 9981.637 - 10032.049: 98.0903% ( 8) 00:07:43.349 10032.049 - 10082.462: 98.1311% ( 8) 00:07:43.349 10082.462 - 10132.874: 98.1720% ( 8) 00:07:43.349 10132.874 - 10183.286: 98.2128% ( 8) 00:07:43.349 10183.286 - 10233.698: 98.2435% ( 6) 00:07:43.349 10233.698 - 10284.111: 98.2843% ( 8) 00:07:43.349 10284.111 - 10334.523: 98.3252% ( 8) 00:07:43.349 10334.523 - 10384.935: 98.3558% ( 6) 00:07:43.349 10384.935 - 10435.348: 98.4069% ( 10) 00:07:43.349 10435.348 - 10485.760: 98.4630% ( 11) 00:07:43.349 
10485.760 - 10536.172: 98.5141% ( 10) 00:07:43.349 10536.172 - 10586.585: 98.5600% ( 9) 00:07:43.349 10586.585 - 10636.997: 98.6009% ( 8) 00:07:43.349 10636.997 - 10687.409: 98.6264% ( 5) 00:07:43.349 10687.409 - 10737.822: 98.6622% ( 7) 00:07:43.349 10737.822 - 10788.234: 98.6979% ( 7) 00:07:43.349 10788.234 - 10838.646: 98.7337% ( 7) 00:07:43.349 10838.646 - 10889.058: 98.7694% ( 7) 00:07:43.349 10889.058 - 10939.471: 98.7949% ( 5) 00:07:43.349 10939.471 - 10989.883: 98.8256% ( 6) 00:07:43.349 10989.883 - 11040.295: 98.8613% ( 7) 00:07:43.349 11040.295 - 11090.708: 98.8817% ( 4) 00:07:43.349 11090.708 - 11141.120: 98.8971% ( 3) 00:07:43.349 11141.120 - 11191.532: 98.9124% ( 3) 00:07:43.349 11191.532 - 11241.945: 98.9277% ( 3) 00:07:43.349 11241.945 - 11292.357: 98.9430% ( 3) 00:07:43.349 11292.357 - 11342.769: 98.9532% ( 2) 00:07:43.349 11342.769 - 11393.182: 98.9634% ( 2) 00:07:43.349 11393.182 - 11443.594: 98.9788% ( 3) 00:07:43.349 11443.594 - 11494.006: 98.9941% ( 3) 00:07:43.349 11494.006 - 11544.418: 99.0043% ( 2) 00:07:43.349 11544.418 - 11594.831: 99.0196% ( 3) 00:07:43.349 12048.542 - 12098.954: 99.0247% ( 1) 00:07:43.349 12098.954 - 12149.366: 99.0400% ( 3) 00:07:43.349 12149.366 - 12199.778: 99.0605% ( 4) 00:07:43.349 12199.778 - 12250.191: 99.0758% ( 3) 00:07:43.349 12250.191 - 12300.603: 99.1064% ( 6) 00:07:43.349 12300.603 - 12351.015: 99.1217% ( 3) 00:07:43.349 12351.015 - 12401.428: 99.1422% ( 4) 00:07:43.349 12401.428 - 12451.840: 99.1626% ( 4) 00:07:43.349 12451.840 - 12502.252: 99.1779% ( 3) 00:07:43.349 12502.252 - 12552.665: 99.1983% ( 4) 00:07:43.349 12552.665 - 12603.077: 99.2188% ( 4) 00:07:43.349 12603.077 - 12653.489: 99.2341% ( 3) 00:07:43.349 12653.489 - 12703.902: 99.2545% ( 4) 00:07:43.349 12703.902 - 12754.314: 99.2749% ( 4) 00:07:43.349 12754.314 - 12804.726: 99.2953% ( 4) 00:07:43.349 12804.726 - 12855.138: 99.3158% ( 4) 00:07:43.349 12855.138 - 12905.551: 99.3311% ( 3) 00:07:43.349 12905.551 - 13006.375: 99.3464% ( 3) 00:07:43.349 15426.166 - 15526.991: 99.3566% ( 2) 00:07:43.349 15526.991 - 15627.815: 99.3770% ( 4) 00:07:43.349 15627.815 - 15728.640: 99.4026% ( 5) 00:07:43.349 15728.640 - 15829.465: 99.4230% ( 4) 00:07:43.349 15829.465 - 15930.289: 99.4434% ( 4) 00:07:43.349 15930.289 - 16031.114: 99.4690% ( 5) 00:07:43.349 16031.114 - 16131.938: 99.4894% ( 4) 00:07:43.349 16131.938 - 16232.763: 99.5098% ( 4) 00:07:43.349 16232.763 - 16333.588: 99.5302% ( 4) 00:07:43.349 16333.588 - 16434.412: 99.5507% ( 4) 00:07:43.349 16434.412 - 16535.237: 99.5711% ( 4) 00:07:43.349 16535.237 - 16636.062: 99.5966% ( 5) 00:07:43.349 16636.062 - 16736.886: 99.6170% ( 4) 00:07:43.349 16736.886 - 16837.711: 99.6375% ( 4) 00:07:43.349 16837.711 - 16938.535: 99.6579% ( 4) 00:07:43.349 16938.535 - 17039.360: 99.6732% ( 3) 00:07:43.349 21878.942 - 21979.766: 99.6783% ( 1) 00:07:43.349 21979.766 - 22080.591: 99.6936% ( 3) 00:07:43.349 22080.591 - 22181.415: 99.7089% ( 3) 00:07:43.349 22181.415 - 22282.240: 99.7294% ( 4) 00:07:43.349 22282.240 - 22383.065: 99.7447% ( 3) 00:07:43.349 22383.065 - 22483.889: 99.7600% ( 3) 00:07:43.349 22483.889 - 22584.714: 99.7804% ( 4) 00:07:43.349 22584.714 - 22685.538: 99.7958% ( 3) 00:07:43.349 22685.538 - 22786.363: 99.8162% ( 4) 00:07:43.349 22786.363 - 22887.188: 99.8315% ( 3) 00:07:43.349 22887.188 - 22988.012: 99.8468% ( 3) 00:07:43.349 22988.012 - 23088.837: 99.8672% ( 4) 00:07:43.349 23088.837 - 23189.662: 99.8826% ( 3) 00:07:43.349 23189.662 - 23290.486: 99.9030% ( 4) 00:07:43.349 23290.486 - 23391.311: 99.9183% ( 3) 00:07:43.349 
23391.311 - 23492.135: 99.9285% ( 2) 00:07:43.349 23492.135 - 23592.960: 99.9438% ( 3) 00:07:43.349 23592.960 - 23693.785: 99.9592% ( 3) 00:07:43.349 23693.785 - 23794.609: 99.9745% ( 3) 00:07:43.349 23794.609 - 23895.434: 99.9949% ( 4) 00:07:43.349 23895.434 - 23996.258: 100.0000% ( 1) 00:07:43.349 00:07:43.349 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:43.349 ============================================================================== 00:07:43.349 Range in us Cumulative IO count 00:07:43.349 5721.797 - 5747.003: 0.0562% ( 11) 00:07:43.349 5747.003 - 5772.209: 0.1379% ( 16) 00:07:43.349 5772.209 - 5797.415: 0.3932% ( 50) 00:07:43.349 5797.415 - 5822.622: 0.9344% ( 106) 00:07:43.349 5822.622 - 5847.828: 1.7208% ( 154) 00:07:43.349 5847.828 - 5873.034: 2.8748% ( 226) 00:07:43.349 5873.034 - 5898.240: 4.4169% ( 302) 00:07:43.349 5898.240 - 5923.446: 6.2143% ( 352) 00:07:43.349 5923.446 - 5948.652: 8.7878% ( 504) 00:07:43.349 5948.652 - 5973.858: 11.4073% ( 513) 00:07:43.350 5973.858 - 5999.065: 14.0217% ( 512) 00:07:43.350 5999.065 - 6024.271: 16.6054% ( 506) 00:07:43.350 6024.271 - 6049.477: 19.1023% ( 489) 00:07:43.350 6049.477 - 6074.683: 21.6810% ( 505) 00:07:43.350 6074.683 - 6099.889: 24.3209% ( 517) 00:07:43.350 6099.889 - 6125.095: 27.0527% ( 535) 00:07:43.350 6125.095 - 6150.302: 29.7692% ( 532) 00:07:43.350 6150.302 - 6175.508: 32.5368% ( 542) 00:07:43.350 6175.508 - 6200.714: 35.3196% ( 545) 00:07:43.350 6200.714 - 6225.920: 38.1281% ( 550) 00:07:43.350 6225.920 - 6251.126: 40.9518% ( 553) 00:07:43.350 6251.126 - 6276.332: 43.6836% ( 535) 00:07:43.350 6276.332 - 6301.538: 46.4052% ( 533) 00:07:43.350 6301.538 - 6326.745: 49.1932% ( 546) 00:07:43.350 6326.745 - 6351.951: 51.9710% ( 544) 00:07:43.350 6351.951 - 6377.157: 54.7028% ( 535) 00:07:43.350 6377.157 - 6402.363: 57.4040% ( 529) 00:07:43.350 6402.363 - 6427.569: 60.2175% ( 551) 00:07:43.350 6427.569 - 6452.775: 62.9902% ( 543) 00:07:43.350 6452.775 - 6503.188: 68.5764% ( 1094) 00:07:43.350 6503.188 - 6553.600: 74.2239% ( 1106) 00:07:43.350 6553.600 - 6604.012: 79.6415% ( 1061) 00:07:43.350 6604.012 - 6654.425: 84.5844% ( 968) 00:07:43.350 6654.425 - 6704.837: 88.5774% ( 782) 00:07:43.350 6704.837 - 6755.249: 91.1101% ( 496) 00:07:43.350 6755.249 - 6805.662: 92.4122% ( 255) 00:07:43.350 6805.662 - 6856.074: 93.0658% ( 128) 00:07:43.350 6856.074 - 6906.486: 93.4998% ( 85) 00:07:43.350 6906.486 - 6956.898: 93.8470% ( 68) 00:07:43.350 6956.898 - 7007.311: 94.1279% ( 55) 00:07:43.350 7007.311 - 7057.723: 94.3781% ( 49) 00:07:43.350 7057.723 - 7108.135: 94.5976% ( 43) 00:07:43.350 7108.135 - 7158.548: 94.7406% ( 28) 00:07:43.350 7158.548 - 7208.960: 94.8580% ( 23) 00:07:43.350 7208.960 - 7259.372: 94.9551% ( 19) 00:07:43.350 7259.372 - 7309.785: 95.0674% ( 22) 00:07:43.350 7309.785 - 7360.197: 95.1542% ( 17) 00:07:43.350 7360.197 - 7410.609: 95.2104% ( 11) 00:07:43.350 7410.609 - 7461.022: 95.2665% ( 11) 00:07:43.350 7461.022 - 7511.434: 95.3431% ( 15) 00:07:43.350 7511.434 - 7561.846: 95.4197% ( 15) 00:07:43.350 7561.846 - 7612.258: 95.4912% ( 14) 00:07:43.350 7612.258 - 7662.671: 95.5627% ( 14) 00:07:43.350 7662.671 - 7713.083: 95.6240% ( 12) 00:07:43.350 7713.083 - 7763.495: 95.6801% ( 11) 00:07:43.350 7763.495 - 7813.908: 95.7618% ( 16) 00:07:43.350 7813.908 - 7864.320: 95.8282% ( 13) 00:07:43.350 7864.320 - 7914.732: 95.9048% ( 15) 00:07:43.350 7914.732 - 7965.145: 96.0018% ( 19) 00:07:43.350 7965.145 - 8015.557: 96.1040% ( 20) 00:07:43.350 8015.557 - 8065.969: 96.2163% ( 22) 00:07:43.350 8065.969 - 
8116.382: 96.3082% ( 18) 00:07:43.350 8116.382 - 8166.794: 96.3797% ( 14) 00:07:43.350 8166.794 - 8217.206: 96.4461% ( 13) 00:07:43.350 8217.206 - 8267.618: 96.5176% ( 14) 00:07:43.350 8267.618 - 8318.031: 96.5788% ( 12) 00:07:43.350 8318.031 - 8368.443: 96.6248% ( 9) 00:07:43.350 8368.443 - 8418.855: 96.6708% ( 9) 00:07:43.350 8418.855 - 8469.268: 96.7116% ( 8) 00:07:43.350 8469.268 - 8519.680: 96.7576% ( 9) 00:07:43.350 8519.680 - 8570.092: 96.8239% ( 13) 00:07:43.350 8570.092 - 8620.505: 96.8750% ( 10) 00:07:43.350 8620.505 - 8670.917: 96.9107% ( 7) 00:07:43.350 8670.917 - 8721.329: 96.9618% ( 10) 00:07:43.350 8721.329 - 8771.742: 97.0129% ( 10) 00:07:43.350 8771.742 - 8822.154: 97.0537% ( 8) 00:07:43.350 8822.154 - 8872.566: 97.0997% ( 9) 00:07:43.350 8872.566 - 8922.978: 97.1405% ( 8) 00:07:43.350 8922.978 - 8973.391: 97.1814% ( 8) 00:07:43.350 8973.391 - 9023.803: 97.2273% ( 9) 00:07:43.350 9023.803 - 9074.215: 97.2682% ( 8) 00:07:43.350 9074.215 - 9124.628: 97.3039% ( 7) 00:07:43.350 9124.628 - 9175.040: 97.3397% ( 7) 00:07:43.350 9175.040 - 9225.452: 97.3805% ( 8) 00:07:43.350 9225.452 - 9275.865: 97.4009% ( 4) 00:07:43.350 9275.865 - 9326.277: 97.4316% ( 6) 00:07:43.350 9326.277 - 9376.689: 97.4826% ( 10) 00:07:43.350 9376.689 - 9427.102: 97.5184% ( 7) 00:07:43.350 9427.102 - 9477.514: 97.5643% ( 9) 00:07:43.350 9477.514 - 9527.926: 97.6103% ( 9) 00:07:43.350 9527.926 - 9578.338: 97.6511% ( 8) 00:07:43.350 9578.338 - 9628.751: 97.7022% ( 10) 00:07:43.350 9628.751 - 9679.163: 97.7482% ( 9) 00:07:43.350 9679.163 - 9729.575: 97.8043% ( 11) 00:07:43.350 9729.575 - 9779.988: 97.8605% ( 11) 00:07:43.350 9779.988 - 9830.400: 97.9320% ( 14) 00:07:43.350 9830.400 - 9880.812: 98.0035% ( 14) 00:07:43.350 9880.812 - 9931.225: 98.0647% ( 12) 00:07:43.350 9931.225 - 9981.637: 98.1464% ( 16) 00:07:43.350 9981.637 - 10032.049: 98.2281% ( 16) 00:07:43.350 10032.049 - 10082.462: 98.3150% ( 17) 00:07:43.350 10082.462 - 10132.874: 98.4018% ( 17) 00:07:43.350 10132.874 - 10183.286: 98.4783% ( 15) 00:07:43.350 10183.286 - 10233.698: 98.5447% ( 13) 00:07:43.350 10233.698 - 10284.111: 98.5958% ( 10) 00:07:43.350 10284.111 - 10334.523: 98.6366% ( 8) 00:07:43.350 10334.523 - 10384.935: 98.6673% ( 6) 00:07:43.350 10384.935 - 10435.348: 98.6979% ( 6) 00:07:43.350 10435.348 - 10485.760: 98.7337% ( 7) 00:07:43.350 10485.760 - 10536.172: 98.7643% ( 6) 00:07:43.350 10536.172 - 10586.585: 98.7949% ( 6) 00:07:43.350 10586.585 - 10636.997: 98.8256% ( 6) 00:07:43.350 10636.997 - 10687.409: 98.8613% ( 7) 00:07:43.350 10687.409 - 10737.822: 98.8920% ( 6) 00:07:43.350 10737.822 - 10788.234: 98.9226% ( 6) 00:07:43.350 10788.234 - 10838.646: 98.9532% ( 6) 00:07:43.350 10838.646 - 10889.058: 98.9737% ( 4) 00:07:43.350 10889.058 - 10939.471: 98.9890% ( 3) 00:07:43.350 10939.471 - 10989.883: 99.0043% ( 3) 00:07:43.350 10989.883 - 11040.295: 99.0145% ( 2) 00:07:43.350 11040.295 - 11090.708: 99.0196% ( 1) 00:07:43.350 12451.840 - 12502.252: 99.0298% ( 2) 00:07:43.350 12502.252 - 12552.665: 99.0400% ( 2) 00:07:43.350 12552.665 - 12603.077: 99.0605% ( 4) 00:07:43.350 12603.077 - 12653.489: 99.0809% ( 4) 00:07:43.350 12653.489 - 12703.902: 99.1064% ( 5) 00:07:43.350 12703.902 - 12754.314: 99.1217% ( 3) 00:07:43.350 12754.314 - 12804.726: 99.1422% ( 4) 00:07:43.350 12804.726 - 12855.138: 99.1626% ( 4) 00:07:43.350 12855.138 - 12905.551: 99.1779% ( 3) 00:07:43.350 12905.551 - 13006.375: 99.2188% ( 8) 00:07:43.350 13006.375 - 13107.200: 99.2545% ( 7) 00:07:43.350 13107.200 - 13208.025: 99.2953% ( 8) 00:07:43.350 13208.025 - 
13308.849: 99.3311% ( 7) 00:07:43.350 13308.849 - 13409.674: 99.3464% ( 3) 00:07:43.350 13712.148 - 13812.972: 99.3566% ( 2) 00:07:43.350 13812.972 - 13913.797: 99.3821% ( 5) 00:07:43.350 13913.797 - 14014.622: 99.4026% ( 4) 00:07:43.350 14014.622 - 14115.446: 99.4281% ( 5) 00:07:43.350 14115.446 - 14216.271: 99.4485% ( 4) 00:07:43.350 14216.271 - 14317.095: 99.4690% ( 4) 00:07:43.350 14317.095 - 14417.920: 99.4894% ( 4) 00:07:43.350 14417.920 - 14518.745: 99.5098% ( 4) 00:07:43.350 14518.745 - 14619.569: 99.5302% ( 4) 00:07:43.350 14619.569 - 14720.394: 99.5558% ( 5) 00:07:43.350 14720.394 - 14821.218: 99.5762% ( 4) 00:07:43.350 14821.218 - 14922.043: 99.5966% ( 4) 00:07:43.350 14922.043 - 15022.868: 99.6170% ( 4) 00:07:43.350 15022.868 - 15123.692: 99.6375% ( 4) 00:07:43.350 15123.692 - 15224.517: 99.6579% ( 4) 00:07:43.350 15224.517 - 15325.342: 99.6732% ( 3) 00:07:43.350 20568.222 - 20669.046: 99.6783% ( 1) 00:07:43.350 20669.046 - 20769.871: 99.6936% ( 3) 00:07:43.350 20769.871 - 20870.695: 99.7141% ( 4) 00:07:43.350 20870.695 - 20971.520: 99.7294% ( 3) 00:07:43.350 20971.520 - 21072.345: 99.7447% ( 3) 00:07:43.350 21072.345 - 21173.169: 99.7600% ( 3) 00:07:43.350 21173.169 - 21273.994: 99.7753% ( 3) 00:07:43.350 21273.994 - 21374.818: 99.7958% ( 4) 00:07:43.350 21374.818 - 21475.643: 99.8111% ( 3) 00:07:43.350 21475.643 - 21576.468: 99.8264% ( 3) 00:07:43.350 21576.468 - 21677.292: 99.8468% ( 4) 00:07:43.350 21677.292 - 21778.117: 99.8621% ( 3) 00:07:43.350 21778.117 - 21878.942: 99.8826% ( 4) 00:07:43.350 21878.942 - 21979.766: 99.8979% ( 3) 00:07:43.350 21979.766 - 22080.591: 99.9132% ( 3) 00:07:43.350 22080.591 - 22181.415: 99.9285% ( 3) 00:07:43.350 22181.415 - 22282.240: 99.9438% ( 3) 00:07:43.350 22282.240 - 22383.065: 99.9540% ( 2) 00:07:43.350 22383.065 - 22483.889: 99.9694% ( 3) 00:07:43.350 22483.889 - 22584.714: 99.9796% ( 2) 00:07:43.350 22584.714 - 22685.538: 100.0000% ( 4) 00:07:43.350 00:07:43.350 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:43.350 ============================================================================== 00:07:43.350 Range in us Cumulative IO count 00:07:43.350 5721.797 - 5747.003: 0.0613% ( 12) 00:07:43.350 5747.003 - 5772.209: 0.1838% ( 24) 00:07:43.350 5772.209 - 5797.415: 0.4647% ( 55) 00:07:43.350 5797.415 - 5822.622: 0.9191% ( 89) 00:07:43.351 5822.622 - 5847.828: 1.7872% ( 170) 00:07:43.351 5847.828 - 5873.034: 3.1914% ( 275) 00:07:43.351 5873.034 - 5898.240: 4.6722% ( 290) 00:07:43.351 5898.240 - 5923.446: 6.4083% ( 340) 00:07:43.351 5923.446 - 5948.652: 8.6652% ( 442) 00:07:43.351 5948.652 - 5973.858: 11.2337% ( 503) 00:07:43.351 5973.858 - 5999.065: 13.8736% ( 517) 00:07:43.351 5999.065 - 6024.271: 16.5237% ( 519) 00:07:43.351 6024.271 - 6049.477: 19.0819% ( 501) 00:07:43.351 6049.477 - 6074.683: 21.7473% ( 522) 00:07:43.351 6074.683 - 6099.889: 24.3873% ( 517) 00:07:43.351 6099.889 - 6125.095: 27.1344% ( 538) 00:07:43.351 6125.095 - 6150.302: 29.9224% ( 546) 00:07:43.351 6150.302 - 6175.508: 32.7155% ( 547) 00:07:43.351 6175.508 - 6200.714: 35.4473% ( 535) 00:07:43.351 6200.714 - 6225.920: 38.2812% ( 555) 00:07:43.351 6225.920 - 6251.126: 41.0233% ( 537) 00:07:43.351 6251.126 - 6276.332: 43.6683% ( 518) 00:07:43.351 6276.332 - 6301.538: 46.4512% ( 545) 00:07:43.351 6301.538 - 6326.745: 49.1881% ( 536) 00:07:43.351 6326.745 - 6351.951: 51.9965% ( 550) 00:07:43.351 6351.951 - 6377.157: 54.7743% ( 544) 00:07:43.351 6377.157 - 6402.363: 57.5061% ( 535) 00:07:43.351 6402.363 - 6427.569: 60.2839% ( 544) 00:07:43.351 
6427.569 - 6452.775: 63.0770% ( 547) 00:07:43.351 6452.775 - 6503.188: 68.5713% ( 1076) 00:07:43.351 6503.188 - 6553.600: 74.1626% ( 1095) 00:07:43.351 6553.600 - 6604.012: 79.5190% ( 1049) 00:07:43.351 6604.012 - 6654.425: 84.4822% ( 972) 00:07:43.351 6654.425 - 6704.837: 88.3578% ( 759) 00:07:43.351 6704.837 - 6755.249: 90.8905% ( 496) 00:07:43.351 6755.249 - 6805.662: 92.2232% ( 261) 00:07:43.351 6805.662 - 6856.074: 92.9483% ( 142) 00:07:43.351 6856.074 - 6906.486: 93.4283% ( 94) 00:07:43.351 6906.486 - 6956.898: 93.7653% ( 66) 00:07:43.351 6956.898 - 7007.311: 94.0359% ( 53) 00:07:43.351 7007.311 - 7057.723: 94.2708% ( 46) 00:07:43.351 7057.723 - 7108.135: 94.4496% ( 35) 00:07:43.351 7108.135 - 7158.548: 94.6181% ( 33) 00:07:43.351 7158.548 - 7208.960: 94.7508% ( 26) 00:07:43.351 7208.960 - 7259.372: 94.8887% ( 27) 00:07:43.351 7259.372 - 7309.785: 95.0266% ( 27) 00:07:43.351 7309.785 - 7360.197: 95.1338% ( 21) 00:07:43.351 7360.197 - 7410.609: 95.2206% ( 17) 00:07:43.351 7410.609 - 7461.022: 95.3125% ( 18) 00:07:43.351 7461.022 - 7511.434: 95.3891% ( 15) 00:07:43.351 7511.434 - 7561.846: 95.4555% ( 13) 00:07:43.351 7561.846 - 7612.258: 95.5270% ( 14) 00:07:43.351 7612.258 - 7662.671: 95.5831% ( 11) 00:07:43.351 7662.671 - 7713.083: 95.6597% ( 15) 00:07:43.351 7713.083 - 7763.495: 95.7312% ( 14) 00:07:43.351 7763.495 - 7813.908: 95.7925% ( 12) 00:07:43.351 7813.908 - 7864.320: 95.8435% ( 10) 00:07:43.351 7864.320 - 7914.732: 95.8946% ( 10) 00:07:43.351 7914.732 - 7965.145: 95.9712% ( 15) 00:07:43.351 7965.145 - 8015.557: 96.0325% ( 12) 00:07:43.351 8015.557 - 8065.969: 96.0835% ( 10) 00:07:43.351 8065.969 - 8116.382: 96.1193% ( 7) 00:07:43.351 8116.382 - 8166.794: 96.1550% ( 7) 00:07:43.351 8166.794 - 8217.206: 96.2469% ( 18) 00:07:43.351 8217.206 - 8267.618: 96.3031% ( 11) 00:07:43.351 8267.618 - 8318.031: 96.3440% ( 8) 00:07:43.351 8318.031 - 8368.443: 96.4103% ( 13) 00:07:43.351 8368.443 - 8418.855: 96.4563% ( 9) 00:07:43.351 8418.855 - 8469.268: 96.5022% ( 9) 00:07:43.351 8469.268 - 8519.680: 96.5635% ( 12) 00:07:43.351 8519.680 - 8570.092: 96.6197% ( 11) 00:07:43.351 8570.092 - 8620.505: 96.6708% ( 10) 00:07:43.351 8620.505 - 8670.917: 96.7269% ( 11) 00:07:43.351 8670.917 - 8721.329: 96.7933% ( 13) 00:07:43.351 8721.329 - 8771.742: 96.8699% ( 15) 00:07:43.351 8771.742 - 8822.154: 96.9107% ( 8) 00:07:43.351 8822.154 - 8872.566: 96.9822% ( 14) 00:07:43.351 8872.566 - 8922.978: 97.0384% ( 11) 00:07:43.351 8922.978 - 8973.391: 97.0997% ( 12) 00:07:43.351 8973.391 - 9023.803: 97.1507% ( 10) 00:07:43.351 9023.803 - 9074.215: 97.1865% ( 7) 00:07:43.351 9074.215 - 9124.628: 97.2426% ( 11) 00:07:43.351 9124.628 - 9175.040: 97.3039% ( 12) 00:07:43.351 9175.040 - 9225.452: 97.3703% ( 13) 00:07:43.351 9225.452 - 9275.865: 97.4418% ( 14) 00:07:43.351 9275.865 - 9326.277: 97.5184% ( 15) 00:07:43.351 9326.277 - 9376.689: 97.5899% ( 14) 00:07:43.351 9376.689 - 9427.102: 97.6869% ( 19) 00:07:43.351 9427.102 - 9477.514: 97.7788% ( 18) 00:07:43.351 9477.514 - 9527.926: 97.8605% ( 16) 00:07:43.351 9527.926 - 9578.338: 97.9524% ( 18) 00:07:43.351 9578.338 - 9628.751: 98.0596% ( 21) 00:07:43.351 9628.751 - 9679.163: 98.1618% ( 20) 00:07:43.351 9679.163 - 9729.575: 98.2690% ( 21) 00:07:43.351 9729.575 - 9779.988: 98.3660% ( 19) 00:07:43.351 9779.988 - 9830.400: 98.4477% ( 16) 00:07:43.351 9830.400 - 9880.812: 98.5294% ( 16) 00:07:43.351 9880.812 - 9931.225: 98.6060% ( 15) 00:07:43.351 9931.225 - 9981.637: 98.6826% ( 15) 00:07:43.351 9981.637 - 10032.049: 98.7388% ( 11) 00:07:43.351 10032.049 - 
10082.462: 98.8000% ( 12) 00:07:43.351 10082.462 - 10132.874: 98.8562% ( 11) 00:07:43.351 10132.874 - 10183.286: 98.8920% ( 7) 00:07:43.351 10183.286 - 10233.698: 98.9277% ( 7) 00:07:43.351 10233.698 - 10284.111: 98.9481% ( 4) 00:07:43.351 10284.111 - 10334.523: 98.9685% ( 4) 00:07:43.351 10334.523 - 10384.935: 98.9890% ( 4) 00:07:43.351 10384.935 - 10435.348: 99.0094% ( 4) 00:07:43.351 10435.348 - 10485.760: 99.0196% ( 2) 00:07:43.351 12048.542 - 12098.954: 99.0298% ( 2) 00:07:43.351 12098.954 - 12149.366: 99.0451% ( 3) 00:07:43.351 12149.366 - 12199.778: 99.0502% ( 1) 00:07:43.351 12199.778 - 12250.191: 99.0554% ( 1) 00:07:43.351 12250.191 - 12300.603: 99.0758% ( 4) 00:07:43.351 12300.603 - 12351.015: 99.0962% ( 4) 00:07:43.351 12351.015 - 12401.428: 99.1166% ( 4) 00:07:43.351 12401.428 - 12451.840: 99.1422% ( 5) 00:07:43.351 12451.840 - 12502.252: 99.1677% ( 5) 00:07:43.351 12502.252 - 12552.665: 99.1881% ( 4) 00:07:43.351 12552.665 - 12603.077: 99.2085% ( 4) 00:07:43.351 12603.077 - 12653.489: 99.2341% ( 5) 00:07:43.351 12653.489 - 12703.902: 99.2596% ( 5) 00:07:43.351 12703.902 - 12754.314: 99.2800% ( 4) 00:07:43.351 12754.314 - 12804.726: 99.3004% ( 4) 00:07:43.351 12804.726 - 12855.138: 99.3209% ( 4) 00:07:43.351 12855.138 - 12905.551: 99.3413% ( 4) 00:07:43.351 12905.551 - 13006.375: 99.3873% ( 9) 00:07:43.351 13006.375 - 13107.200: 99.4230% ( 7) 00:07:43.351 13107.200 - 13208.025: 99.4638% ( 8) 00:07:43.351 13208.025 - 13308.849: 99.5149% ( 10) 00:07:43.351 13308.849 - 13409.674: 99.5558% ( 8) 00:07:43.351 13409.674 - 13510.498: 99.5966% ( 8) 00:07:43.351 13510.498 - 13611.323: 99.6119% ( 3) 00:07:43.351 13611.323 - 13712.148: 99.6324% ( 4) 00:07:43.351 13712.148 - 13812.972: 99.6477% ( 3) 00:07:43.351 13812.972 - 13913.797: 99.6681% ( 4) 00:07:43.351 13913.797 - 14014.622: 99.6732% ( 1) 00:07:43.351 19963.274 - 20064.098: 99.6834% ( 2) 00:07:43.351 20064.098 - 20164.923: 99.6987% ( 3) 00:07:43.351 20164.923 - 20265.748: 99.7192% ( 4) 00:07:43.351 20265.748 - 20366.572: 99.7345% ( 3) 00:07:43.351 20366.572 - 20467.397: 99.7549% ( 4) 00:07:43.351 20467.397 - 20568.222: 99.7702% ( 3) 00:07:43.351 20568.222 - 20669.046: 99.7906% ( 4) 00:07:43.351 20669.046 - 20769.871: 99.8060% ( 3) 00:07:43.351 20769.871 - 20870.695: 99.8213% ( 3) 00:07:43.351 20870.695 - 20971.520: 99.8366% ( 3) 00:07:43.351 20971.520 - 21072.345: 99.8519% ( 3) 00:07:43.351 21072.345 - 21173.169: 99.8723% ( 4) 00:07:43.351 21173.169 - 21273.994: 99.8877% ( 3) 00:07:43.351 21273.994 - 21374.818: 99.9030% ( 3) 00:07:43.351 21374.818 - 21475.643: 99.9234% ( 4) 00:07:43.351 21475.643 - 21576.468: 99.9387% ( 3) 00:07:43.351 21576.468 - 21677.292: 99.9592% ( 4) 00:07:43.351 21677.292 - 21778.117: 99.9745% ( 3) 00:07:43.351 21778.117 - 21878.942: 99.9898% ( 3) 00:07:43.351 21878.942 - 21979.766: 100.0000% ( 2) 00:07:43.351 00:07:43.351 23:51:49 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:44.728 Initializing NVMe Controllers 00:07:44.728 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:44.728 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:44.728 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:44.728 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:44.728 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:44.728 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:44.728 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:44.728 Associating PCIE (0000:00:12.0) 
NSID 1 with lcore 0 00:07:44.728 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:44.728 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:44.728 Initialization complete. Launching workers. 00:07:44.728 ======================================================== 00:07:44.728 Latency(us) 00:07:44.728 Device Information : IOPS MiB/s Average min max 00:07:44.728 PCIE (0000:00:10.0) NSID 1 from core 0: 17097.15 200.36 7496.95 5885.35 34361.73 00:07:44.728 PCIE (0000:00:11.0) NSID 1 from core 0: 17097.15 200.36 7485.57 5890.04 32935.56 00:07:44.728 PCIE (0000:00:13.0) NSID 1 from core 0: 17097.15 200.36 7473.82 5991.53 31770.71 00:07:44.728 PCIE (0000:00:12.0) NSID 1 from core 0: 17097.15 200.36 7462.20 5970.62 29777.32 00:07:44.728 PCIE (0000:00:12.0) NSID 2 from core 0: 17097.15 200.36 7450.56 5877.17 28346.91 00:07:44.728 PCIE (0000:00:12.0) NSID 3 from core 0: 17097.15 200.36 7438.98 5917.63 26036.70 00:07:44.728 ======================================================== 00:07:44.728 Total : 102582.91 1202.14 7468.01 5877.17 34361.73 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6351.951us 00:07:44.728 10.00000% : 6654.425us 00:07:44.728 25.00000% : 6805.662us 00:07:44.728 50.00000% : 7108.135us 00:07:44.728 75.00000% : 7561.846us 00:07:44.728 90.00000% : 8570.092us 00:07:44.728 95.00000% : 9376.689us 00:07:44.728 98.00000% : 10284.111us 00:07:44.728 99.00000% : 11494.006us 00:07:44.728 99.50000% : 26416.049us 00:07:44.728 99.90000% : 33877.071us 00:07:44.728 99.99000% : 34482.018us 00:07:44.728 99.99900% : 34482.018us 00:07:44.728 99.99990% : 34482.018us 00:07:44.728 99.99999% : 34482.018us 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6427.569us 00:07:44.728 10.00000% : 6704.837us 00:07:44.728 25.00000% : 6856.074us 00:07:44.728 50.00000% : 7057.723us 00:07:44.728 75.00000% : 7511.434us 00:07:44.728 90.00000% : 8620.505us 00:07:44.728 95.00000% : 9175.040us 00:07:44.728 98.00000% : 10233.698us 00:07:44.728 99.00000% : 11695.655us 00:07:44.728 99.50000% : 24903.680us 00:07:44.728 99.90000% : 32667.175us 00:07:44.728 99.99000% : 33070.474us 00:07:44.728 99.99900% : 33070.474us 00:07:44.728 99.99990% : 33070.474us 00:07:44.728 99.99999% : 33070.474us 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6427.569us 00:07:44.728 10.00000% : 6704.837us 00:07:44.728 25.00000% : 6856.074us 00:07:44.728 50.00000% : 7057.723us 00:07:44.728 75.00000% : 7511.434us 00:07:44.728 90.00000% : 8570.092us 00:07:44.728 95.00000% : 9124.628us 00:07:44.728 98.00000% : 10384.935us 00:07:44.728 99.00000% : 11191.532us 00:07:44.728 99.50000% : 23996.258us 00:07:44.728 99.90000% : 31457.280us 00:07:44.728 99.99000% : 31860.578us 00:07:44.728 99.99900% : 31860.578us 00:07:44.728 99.99990% : 31860.578us 00:07:44.728 99.99999% : 31860.578us 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6427.569us 00:07:44.728 10.00000% : 6755.249us 00:07:44.728 25.00000% : 6856.074us 
00:07:44.728 50.00000% : 7057.723us 00:07:44.728 75.00000% : 7511.434us 00:07:44.728 90.00000% : 8570.092us 00:07:44.728 95.00000% : 9074.215us 00:07:44.728 98.00000% : 10384.935us 00:07:44.728 99.00000% : 10989.883us 00:07:44.728 99.50000% : 23290.486us 00:07:44.728 99.90000% : 29440.788us 00:07:44.728 99.99000% : 29844.086us 00:07:44.728 99.99900% : 29844.086us 00:07:44.728 99.99990% : 29844.086us 00:07:44.728 99.99999% : 29844.086us 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6427.569us 00:07:44.728 10.00000% : 6755.249us 00:07:44.728 25.00000% : 6856.074us 00:07:44.728 50.00000% : 7057.723us 00:07:44.728 75.00000% : 7511.434us 00:07:44.728 90.00000% : 8519.680us 00:07:44.728 95.00000% : 9124.628us 00:07:44.728 98.00000% : 10334.523us 00:07:44.728 99.00000% : 10838.646us 00:07:44.728 99.50000% : 21979.766us 00:07:44.728 99.90000% : 27827.594us 00:07:44.728 99.99000% : 28432.542us 00:07:44.728 99.99900% : 28432.542us 00:07:44.728 99.99990% : 28432.542us 00:07:44.728 99.99999% : 28432.542us 00:07:44.728 00:07:44.728 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:44.728 ================================================================================= 00:07:44.728 1.00000% : 6402.363us 00:07:44.728 10.00000% : 6755.249us 00:07:44.728 25.00000% : 6906.486us 00:07:44.728 50.00000% : 7057.723us 00:07:44.728 75.00000% : 7511.434us 00:07:44.728 90.00000% : 8469.268us 00:07:44.728 95.00000% : 9225.452us 00:07:44.728 98.00000% : 10284.111us 00:07:44.728 99.00000% : 10737.822us 00:07:44.728 99.50000% : 20669.046us 00:07:44.728 99.90000% : 25508.628us 00:07:44.728 99.99000% : 26012.751us 00:07:44.728 99.99900% : 26214.400us 00:07:44.728 99.99990% : 26214.400us 00:07:44.728 99.99999% : 26214.400us 00:07:44.728 00:07:44.728 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:44.728 ============================================================================== 00:07:44.728 Range in us Cumulative IO count 00:07:44.728 5873.034 - 5898.240: 0.0058% ( 1) 00:07:44.728 5898.240 - 5923.446: 0.0175% ( 2) 00:07:44.728 5923.446 - 5948.652: 0.0350% ( 3) 00:07:44.728 5948.652 - 5973.858: 0.0408% ( 1) 00:07:44.728 5973.858 - 5999.065: 0.0641% ( 4) 00:07:44.728 5999.065 - 6024.271: 0.1049% ( 7) 00:07:44.728 6024.271 - 6049.477: 0.1516% ( 8) 00:07:44.728 6049.477 - 6074.683: 0.1691% ( 3) 00:07:44.728 6074.683 - 6099.889: 0.1982% ( 5) 00:07:44.728 6099.889 - 6125.095: 0.2565% ( 10) 00:07:44.728 6125.095 - 6150.302: 0.2915% ( 6) 00:07:44.729 6150.302 - 6175.508: 0.3265% ( 6) 00:07:44.729 6175.508 - 6200.714: 0.3440% ( 3) 00:07:44.729 6200.714 - 6225.920: 0.4548% ( 19) 00:07:44.729 6225.920 - 6251.126: 0.5480% ( 16) 00:07:44.729 6251.126 - 6276.332: 0.6180% ( 12) 00:07:44.729 6276.332 - 6301.538: 0.7404% ( 21) 00:07:44.729 6301.538 - 6326.745: 0.8396% ( 17) 00:07:44.729 6326.745 - 6351.951: 1.0611% ( 38) 00:07:44.729 6351.951 - 6377.157: 1.3060% ( 42) 00:07:44.729 6377.157 - 6402.363: 1.5975% ( 50) 00:07:44.729 6402.363 - 6427.569: 2.0289% ( 74) 00:07:44.729 6427.569 - 6452.775: 2.6586% ( 108) 00:07:44.729 6452.775 - 6503.188: 4.1336% ( 253) 00:07:44.729 6503.188 - 6553.600: 6.4715% ( 401) 00:07:44.729 6553.600 - 6604.012: 8.9960% ( 433) 00:07:44.729 6604.012 - 6654.425: 12.2318% ( 555) 00:07:44.729 6654.425 - 6704.837: 16.5986% ( 749) 00:07:44.729 6704.837 - 6755.249: 21.4727% ( 836) 00:07:44.729 6755.249 - 6805.662: 
26.8015% ( 914) 00:07:44.729 6805.662 - 6856.074: 32.0954% ( 908) 00:07:44.729 6856.074 - 6906.486: 37.1735% ( 871) 00:07:44.729 6906.486 - 6956.898: 41.2722% ( 703) 00:07:44.729 6956.898 - 7007.311: 45.0385% ( 646) 00:07:44.729 7007.311 - 7057.723: 48.7757% ( 641) 00:07:44.729 7057.723 - 7108.135: 52.2621% ( 598) 00:07:44.729 7108.135 - 7158.548: 55.4221% ( 542) 00:07:44.729 7158.548 - 7208.960: 58.3431% ( 501) 00:07:44.729 7208.960 - 7259.372: 61.1824% ( 487) 00:07:44.729 7259.372 - 7309.785: 63.8759% ( 462) 00:07:44.729 7309.785 - 7360.197: 66.6628% ( 478) 00:07:44.729 7360.197 - 7410.609: 69.3505% ( 461) 00:07:44.729 7410.609 - 7461.022: 71.7292% ( 408) 00:07:44.729 7461.022 - 7511.434: 73.6765% ( 334) 00:07:44.729 7511.434 - 7561.846: 75.1866% ( 259) 00:07:44.729 7561.846 - 7612.258: 76.3934% ( 207) 00:07:44.729 7612.258 - 7662.671: 77.4312% ( 178) 00:07:44.729 7662.671 - 7713.083: 78.4865% ( 181) 00:07:44.729 7713.083 - 7763.495: 79.4135% ( 159) 00:07:44.729 7763.495 - 7813.908: 80.4746% ( 182) 00:07:44.729 7813.908 - 7864.320: 81.3549% ( 151) 00:07:44.729 7864.320 - 7914.732: 82.2645% ( 156) 00:07:44.729 7914.732 - 7965.145: 83.2614% ( 171) 00:07:44.729 7965.145 - 8015.557: 83.9086% ( 111) 00:07:44.729 8015.557 - 8065.969: 84.4508% ( 93) 00:07:44.729 8065.969 - 8116.382: 85.0280% ( 99) 00:07:44.729 8116.382 - 8166.794: 85.7917% ( 131) 00:07:44.729 8166.794 - 8217.206: 86.7596% ( 166) 00:07:44.729 8217.206 - 8267.618: 87.4184% ( 113) 00:07:44.729 8267.618 - 8318.031: 87.8731% ( 78) 00:07:44.729 8318.031 - 8368.443: 88.2929% ( 72) 00:07:44.729 8368.443 - 8418.855: 88.7768% ( 83) 00:07:44.729 8418.855 - 8469.268: 89.3015% ( 90) 00:07:44.729 8469.268 - 8519.680: 89.8846% ( 100) 00:07:44.729 8519.680 - 8570.092: 90.3801% ( 85) 00:07:44.729 8570.092 - 8620.505: 90.9398% ( 96) 00:07:44.729 8620.505 - 8670.917: 91.3946% ( 78) 00:07:44.729 8670.917 - 8721.329: 91.8785% ( 83) 00:07:44.729 8721.329 - 8771.742: 92.2516% ( 64) 00:07:44.729 8771.742 - 8822.154: 92.5606% ( 53) 00:07:44.729 8822.154 - 8872.566: 92.9338% ( 64) 00:07:44.729 8872.566 - 8922.978: 93.2544% ( 55) 00:07:44.729 8922.978 - 8973.391: 93.4993% ( 42) 00:07:44.729 8973.391 - 9023.803: 93.7733% ( 47) 00:07:44.729 9023.803 - 9074.215: 94.0124% ( 41) 00:07:44.729 9074.215 - 9124.628: 94.1814% ( 29) 00:07:44.729 9124.628 - 9175.040: 94.3330% ( 26) 00:07:44.729 9175.040 - 9225.452: 94.5371% ( 35) 00:07:44.729 9225.452 - 9275.865: 94.7761% ( 41) 00:07:44.729 9275.865 - 9326.277: 94.9569% ( 31) 00:07:44.729 9326.277 - 9376.689: 95.1551% ( 34) 00:07:44.729 9376.689 - 9427.102: 95.3125% ( 27) 00:07:44.729 9427.102 - 9477.514: 95.6215% ( 53) 00:07:44.729 9477.514 - 9527.926: 95.8314% ( 36) 00:07:44.729 9527.926 - 9578.338: 96.0180% ( 32) 00:07:44.729 9578.338 - 9628.751: 96.1579% ( 24) 00:07:44.729 9628.751 - 9679.163: 96.3211% ( 28) 00:07:44.729 9679.163 - 9729.575: 96.4844% ( 28) 00:07:44.729 9729.575 - 9779.988: 96.5893% ( 18) 00:07:44.729 9779.988 - 9830.400: 96.7526% ( 28) 00:07:44.729 9830.400 - 9880.812: 96.8167% ( 11) 00:07:44.729 9880.812 - 9931.225: 96.9741% ( 27) 00:07:44.729 9931.225 - 9981.637: 97.1490% ( 30) 00:07:44.729 9981.637 - 10032.049: 97.2715% ( 21) 00:07:44.729 10032.049 - 10082.462: 97.3647% ( 16) 00:07:44.729 10082.462 - 10132.874: 97.4930% ( 22) 00:07:44.729 10132.874 - 10183.286: 97.7146% ( 38) 00:07:44.729 10183.286 - 10233.698: 97.8895% ( 30) 00:07:44.729 10233.698 - 10284.111: 98.0119% ( 21) 00:07:44.729 10284.111 - 10334.523: 98.1227% ( 19) 00:07:44.729 10334.523 - 10384.935: 98.1985% ( 13) 00:07:44.729 
10384.935 - 10435.348: 98.2684% ( 12) 00:07:44.729 10435.348 - 10485.760: 98.3267% ( 10) 00:07:44.729 10485.760 - 10536.172: 98.3967% ( 12) 00:07:44.729 10536.172 - 10586.585: 98.4492% ( 9) 00:07:44.729 10586.585 - 10636.997: 98.5366% ( 15) 00:07:44.729 10636.997 - 10687.409: 98.6066% ( 12) 00:07:44.729 10687.409 - 10737.822: 98.6590% ( 9) 00:07:44.729 10737.822 - 10788.234: 98.6999% ( 7) 00:07:44.729 10788.234 - 10838.646: 98.7174% ( 3) 00:07:44.729 10838.646 - 10889.058: 98.7465% ( 5) 00:07:44.729 10889.058 - 10939.471: 98.7582% ( 2) 00:07:44.729 10939.471 - 10989.883: 98.7815% ( 4) 00:07:44.729 10989.883 - 11040.295: 98.8106% ( 5) 00:07:44.729 11040.295 - 11090.708: 98.8340% ( 4) 00:07:44.729 11090.708 - 11141.120: 98.8573% ( 4) 00:07:44.729 11141.120 - 11191.532: 98.8689% ( 2) 00:07:44.729 11191.532 - 11241.945: 98.8806% ( 2) 00:07:44.729 11241.945 - 11292.357: 98.8981% ( 3) 00:07:44.729 11292.357 - 11342.769: 98.9156% ( 3) 00:07:44.729 11342.769 - 11393.182: 98.9389% ( 4) 00:07:44.729 11393.182 - 11443.594: 98.9622% ( 4) 00:07:44.729 11443.594 - 11494.006: 99.0089% ( 8) 00:07:44.729 11494.006 - 11544.418: 99.0672% ( 10) 00:07:44.729 11544.418 - 11594.831: 99.1021% ( 6) 00:07:44.729 11594.831 - 11645.243: 99.1138% ( 2) 00:07:44.729 11645.243 - 11695.655: 99.1196% ( 1) 00:07:44.729 11695.655 - 11746.068: 99.1313% ( 2) 00:07:44.729 11746.068 - 11796.480: 99.1371% ( 1) 00:07:44.729 11796.480 - 11846.892: 99.1488% ( 2) 00:07:44.729 11846.892 - 11897.305: 99.1546% ( 1) 00:07:44.729 11897.305 - 11947.717: 99.1663% ( 2) 00:07:44.729 11947.717 - 11998.129: 99.1721% ( 1) 00:07:44.729 11998.129 - 12048.542: 99.1779% ( 1) 00:07:44.729 12048.542 - 12098.954: 99.1896% ( 2) 00:07:44.729 12098.954 - 12149.366: 99.1954% ( 1) 00:07:44.729 12149.366 - 12199.778: 99.2013% ( 1) 00:07:44.729 12199.778 - 12250.191: 99.2071% ( 1) 00:07:44.729 12250.191 - 12300.603: 99.2129% ( 1) 00:07:44.729 12300.603 - 12351.015: 99.2246% ( 2) 00:07:44.729 12351.015 - 12401.428: 99.2362% ( 2) 00:07:44.729 12401.428 - 12451.840: 99.2479% ( 2) 00:07:44.729 12451.840 - 12502.252: 99.2537% ( 1) 00:07:44.729 24802.855 - 24903.680: 99.2596% ( 1) 00:07:44.729 24903.680 - 25004.505: 99.2771% ( 3) 00:07:44.729 25004.505 - 25105.329: 99.2945% ( 3) 00:07:44.729 25105.329 - 25206.154: 99.3120% ( 3) 00:07:44.729 25206.154 - 25306.978: 99.3354% ( 4) 00:07:44.729 25306.978 - 25407.803: 99.3528% ( 3) 00:07:44.729 25407.803 - 25508.628: 99.3703% ( 3) 00:07:44.729 25508.628 - 25609.452: 99.3878% ( 3) 00:07:44.729 25609.452 - 25710.277: 99.3995% ( 2) 00:07:44.729 25710.277 - 25811.102: 99.4170% ( 3) 00:07:44.729 25811.102 - 26012.751: 99.4520% ( 6) 00:07:44.729 26012.751 - 26214.400: 99.4928% ( 7) 00:07:44.729 26214.400 - 26416.049: 99.5278% ( 6) 00:07:44.729 26416.049 - 26617.698: 99.5627% ( 6) 00:07:44.729 26617.698 - 26819.348: 99.6035% ( 7) 00:07:44.729 26819.348 - 27020.997: 99.6269% ( 4) 00:07:44.729 31860.578 - 32062.228: 99.6327% ( 1) 00:07:44.729 32062.228 - 32263.877: 99.6618% ( 5) 00:07:44.729 32263.877 - 32465.526: 99.6968% ( 6) 00:07:44.729 32465.526 - 32667.175: 99.7201% ( 4) 00:07:44.729 32667.175 - 32868.825: 99.7493% ( 5) 00:07:44.729 32868.825 - 33070.474: 99.7901% ( 7) 00:07:44.729 33070.474 - 33272.123: 99.8193% ( 5) 00:07:44.729 33272.123 - 33473.772: 99.8542% ( 6) 00:07:44.729 33473.772 - 33675.422: 99.8892% ( 6) 00:07:44.729 33675.422 - 33877.071: 99.9184% ( 5) 00:07:44.729 33877.071 - 34078.720: 99.9534% ( 6) 00:07:44.729 34078.720 - 34280.369: 99.9883% ( 6) 00:07:44.729 34280.369 - 34482.018: 100.0000% ( 2) 
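00:07:44.729 Each bucket row in the histograms above reports the bucket range in microseconds, the cumulative percentage of IOs completed at or below the bucket's upper bound, and (in parentheses) the per-bucket IO count — so the 50.00000%/99.00000% rows in the summary tables can be re-derived by accumulating bucket counts until the target fraction is crossed. Below is a minimal sketch of that calculation for a run like this one (-q 128 -w write -o 12288 -t 1, i.e. queue depth 128, 12288-byte writes, 1 second, with latency tracking enabled); it is an illustrative standalone script, not part of SPDK's tooling, and the regex is an assumption based on the bucket format visible in this log.

#!/usr/bin/env python3
"""Rough percentile extraction from spdk_nvme_perf latency-histogram output.

Illustrative only: assumes the bucket line format seen in this log,
"<low> - <high>: <cumulative%> ( <count>)"; not an official SPDK parser.
"""
import re
import sys

# Matches e.g. "6856.074 - 6906.486: 93.4947% ( 75)"; captures low/high
# bounds (us) and the per-bucket IO count, ignoring the cumulative percent.
BUCKET = re.compile(r'(\d+\.\d+) - (\d+\.\d+):\s+\d+\.\d+%\s+\(\s*(\d+)\)')

def percentile(buckets, pct):
    """Return the bucket upper bound (us) where cumulative IO reaches pct%."""
    total = sum(count for _, _, count in buckets)
    seen = 0
    for _low, high, count in buckets:
        seen += count
        if 100.0 * seen / total >= pct:
            return high
    return buckets[-1][1] if buckets else float('nan')

if __name__ == '__main__':
    buckets = [(float(lo), float(hi), int(n))
               for lo, hi, n in BUCKET.findall(sys.stdin.read())]
    for pct in (50.0, 90.0, 99.0, 99.9):
        print(f'p{pct:g}: <= {percentile(buckets, pct):.3f} us')

Fed one histogram block at a time on stdin (timestamps are harmless, since the regex skips them), this should land close to the corresponding rows of that device's summary latency table; piping the whole log through it would instead merge all six namespaces' buckets into one distribution.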
00:07:44.729 00:07:44.729 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:44.729 ============================================================================== 00:07:44.729 Range in us Cumulative IO count 00:07:44.729 5873.034 - 5898.240: 0.0058% ( 1) 00:07:44.729 5923.446 - 5948.652: 0.0117% ( 1) 00:07:44.729 6049.477 - 6074.683: 0.0175% ( 1) 00:07:44.729 6074.683 - 6099.889: 0.0350% ( 3) 00:07:44.729 6099.889 - 6125.095: 0.0583% ( 4) 00:07:44.729 6125.095 - 6150.302: 0.0816% ( 4) 00:07:44.729 6150.302 - 6175.508: 0.1224% ( 7) 00:07:44.729 6175.508 - 6200.714: 0.1632% ( 7) 00:07:44.729 6200.714 - 6225.920: 0.2099% ( 8) 00:07:44.729 6225.920 - 6251.126: 0.3032% ( 16) 00:07:44.729 6251.126 - 6276.332: 0.3848% ( 14) 00:07:44.729 6276.332 - 6301.538: 0.4606% ( 13) 00:07:44.729 6301.538 - 6326.745: 0.5247% ( 11) 00:07:44.729 6326.745 - 6351.951: 0.6646% ( 24) 00:07:44.729 6351.951 - 6377.157: 0.8104% ( 25) 00:07:44.729 6377.157 - 6402.363: 0.9328% ( 21) 00:07:44.729 6402.363 - 6427.569: 1.0669% ( 23) 00:07:44.729 6427.569 - 6452.775: 1.2535% ( 32) 00:07:44.729 6452.775 - 6503.188: 1.7374% ( 83) 00:07:44.729 6503.188 - 6553.600: 2.5886% ( 146) 00:07:44.729 6553.600 - 6604.012: 4.1861% ( 274) 00:07:44.730 6604.012 - 6654.425: 6.4191% ( 383) 00:07:44.730 6654.425 - 6704.837: 10.3195% ( 669) 00:07:44.730 6704.837 - 6755.249: 14.8029% ( 769) 00:07:44.730 6755.249 - 6805.662: 19.3680% ( 783) 00:07:44.730 6805.662 - 6856.074: 25.1924% ( 999) 00:07:44.730 6856.074 - 6906.486: 32.8008% ( 1305) 00:07:44.730 6906.486 - 6956.898: 39.9545% ( 1227) 00:07:44.730 6956.898 - 7007.311: 46.0763% ( 1050) 00:07:44.730 7007.311 - 7057.723: 52.1863% ( 1048) 00:07:44.730 7057.723 - 7108.135: 57.3286% ( 882) 00:07:44.730 7108.135 - 7158.548: 61.7013% ( 750) 00:07:44.730 7158.548 - 7208.960: 64.9895% ( 564) 00:07:44.730 7208.960 - 7259.372: 67.5956% ( 447) 00:07:44.730 7259.372 - 7309.785: 69.8344% ( 384) 00:07:44.730 7309.785 - 7360.197: 71.4028% ( 269) 00:07:44.730 7360.197 - 7410.609: 72.9769% ( 270) 00:07:44.730 7410.609 - 7461.022: 74.2304% ( 215) 00:07:44.730 7461.022 - 7511.434: 75.4664% ( 212) 00:07:44.730 7511.434 - 7561.846: 76.5800% ( 191) 00:07:44.730 7561.846 - 7612.258: 77.9443% ( 234) 00:07:44.730 7612.258 - 7662.671: 78.8479% ( 155) 00:07:44.730 7662.671 - 7713.083: 79.7866% ( 161) 00:07:44.730 7713.083 - 7763.495: 80.3463% ( 96) 00:07:44.730 7763.495 - 7813.908: 80.9643% ( 106) 00:07:44.730 7813.908 - 7864.320: 81.7397% ( 133) 00:07:44.730 7864.320 - 7914.732: 82.4160% ( 116) 00:07:44.730 7914.732 - 7965.145: 82.9641% ( 94) 00:07:44.730 7965.145 - 8015.557: 83.5238% ( 96) 00:07:44.730 8015.557 - 8065.969: 84.0777% ( 95) 00:07:44.730 8065.969 - 8116.382: 84.6140% ( 92) 00:07:44.730 8116.382 - 8166.794: 85.4478% ( 143) 00:07:44.730 8166.794 - 8217.206: 86.0833% ( 109) 00:07:44.730 8217.206 - 8267.618: 86.6196% ( 92) 00:07:44.730 8267.618 - 8318.031: 87.0627% ( 76) 00:07:44.730 8318.031 - 8368.443: 87.6341% ( 98) 00:07:44.730 8368.443 - 8418.855: 88.3162% ( 117) 00:07:44.730 8418.855 - 8469.268: 88.8876% ( 98) 00:07:44.730 8469.268 - 8519.680: 89.4764% ( 101) 00:07:44.730 8519.680 - 8570.092: 89.9487% ( 81) 00:07:44.730 8570.092 - 8620.505: 90.4326% ( 83) 00:07:44.730 8620.505 - 8670.917: 91.1905% ( 130) 00:07:44.730 8670.917 - 8721.329: 91.7502% ( 96) 00:07:44.730 8721.329 - 8771.742: 92.2516% ( 86) 00:07:44.730 8771.742 - 8822.154: 92.6947% ( 76) 00:07:44.730 8822.154 - 8872.566: 93.1320% ( 75) 00:07:44.730 8872.566 - 8922.978: 93.5576% ( 73) 00:07:44.730 8922.978 - 8973.391: 
93.9307% ( 64) 00:07:44.730 8973.391 - 9023.803: 94.3097% ( 65) 00:07:44.730 9023.803 - 9074.215: 94.6070% ( 51) 00:07:44.730 9074.215 - 9124.628: 94.8519% ( 42) 00:07:44.730 9124.628 - 9175.040: 95.0735% ( 38) 00:07:44.730 9175.040 - 9225.452: 95.3417% ( 46) 00:07:44.730 9225.452 - 9275.865: 95.6448% ( 52) 00:07:44.730 9275.865 - 9326.277: 95.7673% ( 21) 00:07:44.730 9326.277 - 9376.689: 95.8664% ( 17) 00:07:44.730 9376.689 - 9427.102: 96.0005% ( 23) 00:07:44.730 9427.102 - 9477.514: 96.1404% ( 24) 00:07:44.730 9477.514 - 9527.926: 96.2978% ( 27) 00:07:44.730 9527.926 - 9578.338: 96.4202% ( 21) 00:07:44.730 9578.338 - 9628.751: 96.5718% ( 26) 00:07:44.730 9628.751 - 9679.163: 96.6943% ( 21) 00:07:44.730 9679.163 - 9729.575: 96.8050% ( 19) 00:07:44.730 9729.575 - 9779.988: 96.8983% ( 16) 00:07:44.730 9779.988 - 9830.400: 97.0266% ( 22) 00:07:44.730 9830.400 - 9880.812: 97.1432% ( 20) 00:07:44.730 9880.812 - 9931.225: 97.2190% ( 13) 00:07:44.730 9931.225 - 9981.637: 97.2831% ( 11) 00:07:44.730 9981.637 - 10032.049: 97.3764% ( 16) 00:07:44.730 10032.049 - 10082.462: 97.5280% ( 26) 00:07:44.730 10082.462 - 10132.874: 97.7379% ( 36) 00:07:44.730 10132.874 - 10183.286: 97.8836% ( 25) 00:07:44.730 10183.286 - 10233.698: 98.0119% ( 22) 00:07:44.730 10233.698 - 10284.111: 98.0702% ( 10) 00:07:44.730 10284.111 - 10334.523: 98.1227% ( 9) 00:07:44.730 10334.523 - 10384.935: 98.1693% ( 8) 00:07:44.730 10384.935 - 10435.348: 98.2101% ( 7) 00:07:44.730 10435.348 - 10485.760: 98.2451% ( 6) 00:07:44.730 10485.760 - 10536.172: 98.2859% ( 7) 00:07:44.730 10536.172 - 10586.585: 98.3326% ( 8) 00:07:44.730 10586.585 - 10636.997: 98.4142% ( 14) 00:07:44.730 10636.997 - 10687.409: 98.4958% ( 14) 00:07:44.730 10687.409 - 10737.822: 98.5366% ( 7) 00:07:44.730 10737.822 - 10788.234: 98.5658% ( 5) 00:07:44.730 10788.234 - 10838.646: 98.5774% ( 2) 00:07:44.730 10838.646 - 10889.058: 98.5949% ( 3) 00:07:44.730 10889.058 - 10939.471: 98.6066% ( 2) 00:07:44.730 10939.471 - 10989.883: 98.6299% ( 4) 00:07:44.730 10989.883 - 11040.295: 98.6357% ( 1) 00:07:44.730 11040.295 - 11090.708: 98.6590% ( 4) 00:07:44.730 11090.708 - 11141.120: 98.7348% ( 13) 00:07:44.730 11141.120 - 11191.532: 98.7407% ( 1) 00:07:44.730 11191.532 - 11241.945: 98.7523% ( 2) 00:07:44.730 11241.945 - 11292.357: 98.7698% ( 3) 00:07:44.730 11292.357 - 11342.769: 98.7815% ( 2) 00:07:44.730 11342.769 - 11393.182: 98.7931% ( 2) 00:07:44.730 11393.182 - 11443.594: 98.8048% ( 2) 00:07:44.730 11443.594 - 11494.006: 98.8165% ( 2) 00:07:44.730 11494.006 - 11544.418: 98.8281% ( 2) 00:07:44.730 11544.418 - 11594.831: 98.8398% ( 2) 00:07:44.730 11594.831 - 11645.243: 98.8981% ( 10) 00:07:44.730 11645.243 - 11695.655: 99.0205% ( 21) 00:07:44.730 11695.655 - 11746.068: 99.1721% ( 26) 00:07:44.730 11746.068 - 11796.480: 99.2246% ( 9) 00:07:44.730 11796.480 - 11846.892: 99.2304% ( 1) 00:07:44.730 11846.892 - 11897.305: 99.2362% ( 1) 00:07:44.730 11897.305 - 11947.717: 99.2479% ( 2) 00:07:44.730 11947.717 - 11998.129: 99.2537% ( 1) 00:07:44.730 23492.135 - 23592.960: 99.2596% ( 1) 00:07:44.730 23592.960 - 23693.785: 99.2771% ( 3) 00:07:44.730 23693.785 - 23794.609: 99.2945% ( 3) 00:07:44.730 23794.609 - 23895.434: 99.3120% ( 3) 00:07:44.730 23895.434 - 23996.258: 99.3354% ( 4) 00:07:44.730 23996.258 - 24097.083: 99.3528% ( 3) 00:07:44.730 24097.083 - 24197.908: 99.3703% ( 3) 00:07:44.730 24197.908 - 24298.732: 99.3878% ( 3) 00:07:44.730 24298.732 - 24399.557: 99.4111% ( 4) 00:07:44.730 24399.557 - 24500.382: 99.4286% ( 3) 00:07:44.730 24500.382 - 24601.206: 99.4520% ( 
4) 00:07:44.730 24601.206 - 24702.031: 99.4694% ( 3) 00:07:44.730 24702.031 - 24802.855: 99.4869% ( 3) 00:07:44.730 24802.855 - 24903.680: 99.5044% ( 3) 00:07:44.730 24903.680 - 25004.505: 99.5219% ( 3) 00:07:44.730 25004.505 - 25105.329: 99.5394% ( 3) 00:07:44.730 25105.329 - 25206.154: 99.5627% ( 4) 00:07:44.730 25206.154 - 25306.978: 99.5802% ( 3) 00:07:44.730 25306.978 - 25407.803: 99.5977% ( 3) 00:07:44.730 25407.803 - 25508.628: 99.6210% ( 4) 00:07:44.730 25508.628 - 25609.452: 99.6269% ( 1) 00:07:44.730 30852.332 - 31053.982: 99.6560% ( 5) 00:07:44.730 31053.982 - 31255.631: 99.6910% ( 6) 00:07:44.730 31255.631 - 31457.280: 99.7201% ( 5) 00:07:44.730 31457.280 - 31658.929: 99.7551% ( 6) 00:07:44.730 31658.929 - 31860.578: 99.7901% ( 6) 00:07:44.730 31860.578 - 32062.228: 99.8193% ( 5) 00:07:44.730 32062.228 - 32263.877: 99.8542% ( 6) 00:07:44.730 32263.877 - 32465.526: 99.8951% ( 7) 00:07:44.730 32465.526 - 32667.175: 99.9359% ( 7) 00:07:44.730 32667.175 - 32868.825: 99.9825% ( 8) 00:07:44.730 32868.825 - 33070.474: 100.0000% ( 3) 00:07:44.730 00:07:44.730 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:44.730 ============================================================================== 00:07:44.730 Range in us Cumulative IO count 00:07:44.730 5973.858 - 5999.065: 0.0058% ( 1) 00:07:44.730 5999.065 - 6024.271: 0.0117% ( 1) 00:07:44.730 6024.271 - 6049.477: 0.0175% ( 1) 00:07:44.730 6074.683 - 6099.889: 0.0350% ( 3) 00:07:44.730 6099.889 - 6125.095: 0.0700% ( 6) 00:07:44.730 6125.095 - 6150.302: 0.0933% ( 4) 00:07:44.730 6150.302 - 6175.508: 0.1166% ( 4) 00:07:44.730 6175.508 - 6200.714: 0.1574% ( 7) 00:07:44.730 6200.714 - 6225.920: 0.1866% ( 5) 00:07:44.730 6225.920 - 6251.126: 0.2332% ( 8) 00:07:44.730 6251.126 - 6276.332: 0.3032% ( 12) 00:07:44.730 6276.332 - 6301.538: 0.5422% ( 41) 00:07:44.730 6301.538 - 6326.745: 0.6413% ( 17) 00:07:44.730 6326.745 - 6351.951: 0.7171% ( 13) 00:07:44.730 6351.951 - 6377.157: 0.8396% ( 21) 00:07:44.730 6377.157 - 6402.363: 0.9853% ( 25) 00:07:44.730 6402.363 - 6427.569: 1.1835% ( 34) 00:07:44.730 6427.569 - 6452.775: 1.4284% ( 42) 00:07:44.730 6452.775 - 6503.188: 2.0639% ( 109) 00:07:44.730 6503.188 - 6553.600: 3.1308% ( 183) 00:07:44.730 6553.600 - 6604.012: 4.7575% ( 279) 00:07:44.730 6604.012 - 6654.425: 7.0604% ( 395) 00:07:44.730 6654.425 - 6704.837: 10.4594% ( 583) 00:07:44.730 6704.837 - 6755.249: 15.0361% ( 785) 00:07:44.730 6755.249 - 6805.662: 20.1609% ( 879) 00:07:44.730 6805.662 - 6856.074: 26.5392% ( 1094) 00:07:44.730 6856.074 - 6906.486: 32.4569% ( 1015) 00:07:44.730 6906.486 - 6956.898: 38.2929% ( 1001) 00:07:44.730 6956.898 - 7007.311: 44.6653% ( 1093) 00:07:44.730 7007.311 - 7057.723: 50.4489% ( 992) 00:07:44.730 7057.723 - 7108.135: 55.5912% ( 882) 00:07:44.730 7108.135 - 7158.548: 60.3486% ( 816) 00:07:44.730 7158.548 - 7208.960: 63.7593% ( 585) 00:07:44.730 7208.960 - 7259.372: 66.7794% ( 518) 00:07:44.730 7259.372 - 7309.785: 68.9307% ( 369) 00:07:44.730 7309.785 - 7360.197: 70.7031% ( 304) 00:07:44.731 7360.197 - 7410.609: 72.6154% ( 328) 00:07:44.731 7410.609 - 7461.022: 74.1138% ( 257) 00:07:44.731 7461.022 - 7511.434: 75.3965% ( 220) 00:07:44.731 7511.434 - 7561.846: 76.7899% ( 239) 00:07:44.731 7561.846 - 7612.258: 77.7810% ( 170) 00:07:44.731 7612.258 - 7662.671: 78.5798% ( 137) 00:07:44.731 7662.671 - 7713.083: 79.2794% ( 120) 00:07:44.731 7713.083 - 7763.495: 80.0082% ( 125) 00:07:44.731 7763.495 - 7813.908: 80.8186% ( 139) 00:07:44.731 7813.908 - 7864.320: 81.9321% ( 191) 00:07:44.731 7864.320 
- 7914.732: 82.7250% ( 136) 00:07:44.731 7914.732 - 7965.145: 83.7045% ( 168) 00:07:44.731 7965.145 - 8015.557: 84.4625% ( 130) 00:07:44.731 8015.557 - 8065.969: 84.9755% ( 88) 00:07:44.731 8065.969 - 8116.382: 85.5469% ( 98) 00:07:44.731 8116.382 - 8166.794: 86.3106% ( 131) 00:07:44.731 8166.794 - 8217.206: 86.8528% ( 93) 00:07:44.731 8217.206 - 8267.618: 87.2318% ( 65) 00:07:44.731 8267.618 - 8318.031: 87.6866% ( 78) 00:07:44.731 8318.031 - 8368.443: 88.1938% ( 87) 00:07:44.731 8368.443 - 8418.855: 88.8118% ( 106) 00:07:44.731 8418.855 - 8469.268: 89.4240% ( 105) 00:07:44.731 8469.268 - 8519.680: 89.9487% ( 90) 00:07:44.731 8519.680 - 8570.092: 90.2752% ( 56) 00:07:44.731 8570.092 - 8620.505: 90.7824% ( 87) 00:07:44.731 8620.505 - 8670.917: 91.4062% ( 107) 00:07:44.731 8670.917 - 8721.329: 91.8027% ( 68) 00:07:44.731 8721.329 - 8771.742: 92.2983% ( 85) 00:07:44.731 8771.742 - 8822.154: 92.7239% ( 73) 00:07:44.731 8822.154 - 8872.566: 93.3652% ( 110) 00:07:44.731 8872.566 - 8922.978: 93.8783% ( 88) 00:07:44.731 8922.978 - 8973.391: 94.1756% ( 51) 00:07:44.731 8973.391 - 9023.803: 94.4729% ( 51) 00:07:44.731 9023.803 - 9074.215: 94.7470% ( 47) 00:07:44.731 9074.215 - 9124.628: 95.0152% ( 46) 00:07:44.731 9124.628 - 9175.040: 95.3417% ( 56) 00:07:44.731 9175.040 - 9225.452: 95.5166% ( 30) 00:07:44.731 9225.452 - 9275.865: 95.7148% ( 34) 00:07:44.731 9275.865 - 9326.277: 95.8139% ( 17) 00:07:44.731 9326.277 - 9376.689: 95.9363% ( 21) 00:07:44.731 9376.689 - 9427.102: 96.0996% ( 28) 00:07:44.731 9427.102 - 9477.514: 96.3270% ( 39) 00:07:44.731 9477.514 - 9527.926: 96.5310% ( 35) 00:07:44.731 9527.926 - 9578.338: 96.6476% ( 20) 00:07:44.731 9578.338 - 9628.751: 96.7467% ( 17) 00:07:44.731 9628.751 - 9679.163: 96.8400% ( 16) 00:07:44.731 9679.163 - 9729.575: 96.9216% ( 14) 00:07:44.731 9729.575 - 9779.988: 97.0441% ( 21) 00:07:44.731 9779.988 - 9830.400: 97.1665% ( 21) 00:07:44.731 9830.400 - 9880.812: 97.2481% ( 14) 00:07:44.731 9880.812 - 9931.225: 97.2889% ( 7) 00:07:44.731 9931.225 - 9981.637: 97.3239% ( 6) 00:07:44.731 9981.637 - 10032.049: 97.3764% ( 9) 00:07:44.731 10032.049 - 10082.462: 97.4114% ( 6) 00:07:44.731 10082.462 - 10132.874: 97.4755% ( 11) 00:07:44.731 10132.874 - 10183.286: 97.5513% ( 13) 00:07:44.731 10183.286 - 10233.698: 97.6446% ( 16) 00:07:44.731 10233.698 - 10284.111: 97.7495% ( 18) 00:07:44.731 10284.111 - 10334.523: 97.8545% ( 18) 00:07:44.731 10334.523 - 10384.935: 98.0410% ( 32) 00:07:44.731 10384.935 - 10435.348: 98.1576% ( 20) 00:07:44.731 10435.348 - 10485.760: 98.2509% ( 16) 00:07:44.731 10485.760 - 10536.172: 98.3092% ( 10) 00:07:44.731 10536.172 - 10586.585: 98.3500% ( 7) 00:07:44.731 10586.585 - 10636.997: 98.3675% ( 3) 00:07:44.731 10636.997 - 10687.409: 98.4025% ( 6) 00:07:44.731 10687.409 - 10737.822: 98.4550% ( 9) 00:07:44.731 10737.822 - 10788.234: 98.5308% ( 13) 00:07:44.731 10788.234 - 10838.646: 98.6007% ( 12) 00:07:44.731 10838.646 - 10889.058: 98.6649% ( 11) 00:07:44.731 10889.058 - 10939.471: 98.7348% ( 12) 00:07:44.731 10939.471 - 10989.883: 98.7815% ( 8) 00:07:44.731 10989.883 - 11040.295: 98.8281% ( 8) 00:07:44.731 11040.295 - 11090.708: 98.8689% ( 7) 00:07:44.731 11090.708 - 11141.120: 98.9214% ( 9) 00:07:44.731 11141.120 - 11191.532: 99.0089% ( 15) 00:07:44.731 11191.532 - 11241.945: 99.0672% ( 10) 00:07:44.731 11241.945 - 11292.357: 99.0905% ( 4) 00:07:44.731 11292.357 - 11342.769: 99.1138% ( 4) 00:07:44.731 11342.769 - 11393.182: 99.1371% ( 4) 00:07:44.731 11393.182 - 11443.594: 99.1546% ( 3) 00:07:44.731 11443.594 - 11494.006: 99.1604% ( 
1) 00:07:44.731 11494.006 - 11544.418: 99.1721% ( 2) 00:07:44.731 11544.418 - 11594.831: 99.1779% ( 1) 00:07:44.731 11594.831 - 11645.243: 99.1896% ( 2) 00:07:44.731 11645.243 - 11695.655: 99.2013% ( 2) 00:07:44.731 11695.655 - 11746.068: 99.2071% ( 1) 00:07:44.731 11746.068 - 11796.480: 99.2188% ( 2) 00:07:44.731 11796.480 - 11846.892: 99.2304% ( 2) 00:07:44.731 11846.892 - 11897.305: 99.2421% ( 2) 00:07:44.731 11897.305 - 11947.717: 99.2479% ( 1) 00:07:44.731 11947.717 - 11998.129: 99.2537% ( 1) 00:07:44.731 22383.065 - 22483.889: 99.2596% ( 1) 00:07:44.731 22483.889 - 22584.714: 99.2771% ( 3) 00:07:44.731 22584.714 - 22685.538: 99.2887% ( 2) 00:07:44.731 22685.538 - 22786.363: 99.3062% ( 3) 00:07:44.731 22786.363 - 22887.188: 99.3237% ( 3) 00:07:44.731 22887.188 - 22988.012: 99.3412% ( 3) 00:07:44.731 22988.012 - 23088.837: 99.3587% ( 3) 00:07:44.731 23088.837 - 23189.662: 99.3762% ( 3) 00:07:44.731 23189.662 - 23290.486: 99.3937% ( 3) 00:07:44.731 23290.486 - 23391.311: 99.4111% ( 3) 00:07:44.731 23391.311 - 23492.135: 99.4228% ( 2) 00:07:44.731 23492.135 - 23592.960: 99.4403% ( 3) 00:07:44.731 23592.960 - 23693.785: 99.4578% ( 3) 00:07:44.731 23693.785 - 23794.609: 99.4753% ( 3) 00:07:44.731 23794.609 - 23895.434: 99.4928% ( 3) 00:07:44.731 23895.434 - 23996.258: 99.5103% ( 3) 00:07:44.731 23996.258 - 24097.083: 99.5278% ( 3) 00:07:44.731 24097.083 - 24197.908: 99.5452% ( 3) 00:07:44.731 24197.908 - 24298.732: 99.5627% ( 3) 00:07:44.731 24298.732 - 24399.557: 99.5802% ( 3) 00:07:44.731 24399.557 - 24500.382: 99.5977% ( 3) 00:07:44.731 24500.382 - 24601.206: 99.6152% ( 3) 00:07:44.731 24601.206 - 24702.031: 99.6269% ( 2) 00:07:44.731 29642.437 - 29844.086: 99.6444% ( 3) 00:07:44.731 29844.086 - 30045.735: 99.6852% ( 7) 00:07:44.731 30045.735 - 30247.385: 99.7201% ( 6) 00:07:44.731 30247.385 - 30449.034: 99.7551% ( 6) 00:07:44.731 30449.034 - 30650.683: 99.7843% ( 5) 00:07:44.731 30650.683 - 30852.332: 99.8193% ( 6) 00:07:44.731 30852.332 - 31053.982: 99.8542% ( 6) 00:07:44.731 31053.982 - 31255.631: 99.8892% ( 6) 00:07:44.731 31255.631 - 31457.280: 99.9300% ( 7) 00:07:44.731 31457.280 - 31658.929: 99.9708% ( 7) 00:07:44.731 31658.929 - 31860.578: 100.0000% ( 5) 00:07:44.731 00:07:44.731 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:44.731 ============================================================================== 00:07:44.731 Range in us Cumulative IO count 00:07:44.731 5948.652 - 5973.858: 0.0058% ( 1) 00:07:44.731 6024.271 - 6049.477: 0.0175% ( 2) 00:07:44.731 6049.477 - 6074.683: 0.0408% ( 4) 00:07:44.731 6074.683 - 6099.889: 0.0641% ( 4) 00:07:44.731 6099.889 - 6125.095: 0.0816% ( 3) 00:07:44.731 6125.095 - 6150.302: 0.1283% ( 8) 00:07:44.731 6150.302 - 6175.508: 0.1516% ( 4) 00:07:44.731 6175.508 - 6200.714: 0.1982% ( 8) 00:07:44.731 6200.714 - 6225.920: 0.2215% ( 4) 00:07:44.731 6225.920 - 6251.126: 0.2740% ( 9) 00:07:44.731 6251.126 - 6276.332: 0.3265% ( 9) 00:07:44.731 6276.332 - 6301.538: 0.3906% ( 11) 00:07:44.731 6301.538 - 6326.745: 0.4664% ( 13) 00:07:44.731 6326.745 - 6351.951: 0.6938% ( 39) 00:07:44.731 6351.951 - 6377.157: 0.7812% ( 15) 00:07:44.731 6377.157 - 6402.363: 0.8979% ( 20) 00:07:44.731 6402.363 - 6427.569: 1.0669% ( 29) 00:07:44.731 6427.569 - 6452.775: 1.2185% ( 26) 00:07:44.731 6452.775 - 6503.188: 1.8015% ( 100) 00:07:44.731 6503.188 - 6553.600: 2.7694% ( 166) 00:07:44.731 6553.600 - 6604.012: 4.2561% ( 255) 00:07:44.731 6604.012 - 6654.425: 6.5124% ( 387) 00:07:44.731 6654.425 - 6704.837: 9.6782% ( 543) 00:07:44.731 6704.837 - 
6755.249: 14.0800% ( 755) 00:07:44.731 6755.249 - 6805.662: 20.1201% ( 1036) 00:07:44.731 6805.662 - 6856.074: 26.7083% ( 1130) 00:07:44.731 6856.074 - 6906.486: 32.7076% ( 1029) 00:07:44.731 6906.486 - 6956.898: 39.6747% ( 1195) 00:07:44.731 6956.898 - 7007.311: 45.7673% ( 1045) 00:07:44.731 7007.311 - 7057.723: 51.2826% ( 946) 00:07:44.731 7057.723 - 7108.135: 56.1159% ( 829) 00:07:44.731 7108.135 - 7158.548: 60.3253% ( 722) 00:07:44.731 7158.548 - 7208.960: 63.2055% ( 494) 00:07:44.731 7208.960 - 7259.372: 66.6628% ( 593) 00:07:44.731 7259.372 - 7309.785: 68.6567% ( 342) 00:07:44.731 7309.785 - 7360.197: 70.3417% ( 289) 00:07:44.731 7360.197 - 7410.609: 72.0732% ( 297) 00:07:44.731 7410.609 - 7461.022: 73.6765% ( 275) 00:07:44.731 7461.022 - 7511.434: 75.2565% ( 271) 00:07:44.731 7511.434 - 7561.846: 76.3526% ( 188) 00:07:44.731 7561.846 - 7612.258: 77.3204% ( 166) 00:07:44.731 7612.258 - 7662.671: 78.1658% ( 145) 00:07:44.731 7662.671 - 7713.083: 79.3902% ( 210) 00:07:44.731 7713.083 - 7763.495: 80.7136% ( 227) 00:07:44.731 7763.495 - 7813.908: 81.6873% ( 167) 00:07:44.731 7813.908 - 7864.320: 82.3519% ( 114) 00:07:44.731 7864.320 - 7914.732: 82.8533% ( 86) 00:07:44.731 7914.732 - 7965.145: 83.3081% ( 78) 00:07:44.731 7965.145 - 8015.557: 83.9436% ( 109) 00:07:44.731 8015.557 - 8065.969: 84.5208% ( 99) 00:07:44.731 8065.969 - 8116.382: 85.0921% ( 98) 00:07:44.731 8116.382 - 8166.794: 85.7276% ( 109) 00:07:44.731 8166.794 - 8217.206: 86.2990% ( 98) 00:07:44.731 8217.206 - 8267.618: 86.9461% ( 111) 00:07:44.731 8267.618 - 8318.031: 87.5466% ( 103) 00:07:44.732 8318.031 - 8368.443: 87.9956% ( 77) 00:07:44.732 8368.443 - 8418.855: 88.4095% ( 71) 00:07:44.732 8418.855 - 8469.268: 88.9809% ( 98) 00:07:44.732 8469.268 - 8519.680: 89.4939% ( 88) 00:07:44.732 8519.680 - 8570.092: 90.0886% ( 102) 00:07:44.732 8570.092 - 8620.505: 90.7066% ( 106) 00:07:44.732 8620.505 - 8670.917: 91.4412% ( 126) 00:07:44.732 8670.917 - 8721.329: 92.1059% ( 114) 00:07:44.732 8721.329 - 8771.742: 92.5781% ( 81) 00:07:44.732 8771.742 - 8822.154: 93.2020% ( 107) 00:07:44.732 8822.154 - 8872.566: 93.7208% ( 89) 00:07:44.732 8872.566 - 8922.978: 94.1931% ( 81) 00:07:44.732 8922.978 - 8973.391: 94.5138% ( 55) 00:07:44.732 8973.391 - 9023.803: 94.8286% ( 54) 00:07:44.732 9023.803 - 9074.215: 95.1784% ( 60) 00:07:44.732 9074.215 - 9124.628: 95.3650% ( 32) 00:07:44.732 9124.628 - 9175.040: 95.5049% ( 24) 00:07:44.732 9175.040 - 9225.452: 95.6215% ( 20) 00:07:44.732 9225.452 - 9275.865: 95.7381% ( 20) 00:07:44.732 9275.865 - 9326.277: 95.8722% ( 23) 00:07:44.732 9326.277 - 9376.689: 96.0529% ( 31) 00:07:44.732 9376.689 - 9427.102: 96.2861% ( 40) 00:07:44.732 9427.102 - 9477.514: 96.5019% ( 37) 00:07:44.732 9477.514 - 9527.926: 96.6709% ( 29) 00:07:44.732 9527.926 - 9578.338: 96.7817% ( 19) 00:07:44.732 9578.338 - 9628.751: 96.8633% ( 14) 00:07:44.732 9628.751 - 9679.163: 96.9158% ( 9) 00:07:44.732 9679.163 - 9729.575: 96.9566% ( 7) 00:07:44.732 9729.575 - 9779.988: 96.9916% ( 6) 00:07:44.732 9779.988 - 9830.400: 97.0324% ( 7) 00:07:44.732 9830.400 - 9880.812: 97.0907% ( 10) 00:07:44.732 9880.812 - 9931.225: 97.1957% ( 18) 00:07:44.732 9931.225 - 9981.637: 97.2773% ( 14) 00:07:44.732 9981.637 - 10032.049: 97.3356% ( 10) 00:07:44.732 10032.049 - 10082.462: 97.3997% ( 11) 00:07:44.732 10082.462 - 10132.874: 97.4639% ( 11) 00:07:44.732 10132.874 - 10183.286: 97.5455% ( 14) 00:07:44.732 10183.286 - 10233.698: 97.6213% ( 13) 00:07:44.732 10233.698 - 10284.111: 97.6796% ( 10) 00:07:44.732 10284.111 - 10334.523: 97.8195% ( 24) 
00:07:44.732 10334.523 - 10384.935: 98.0527% ( 40) 00:07:44.732 10384.935 - 10435.348: 98.2509% ( 34) 00:07:44.732 10435.348 - 10485.760: 98.4083% ( 27) 00:07:44.732 10485.760 - 10536.172: 98.5016% ( 16) 00:07:44.732 10536.172 - 10586.585: 98.5599% ( 10) 00:07:44.732 10586.585 - 10636.997: 98.6299% ( 12) 00:07:44.732 10636.997 - 10687.409: 98.6707% ( 7) 00:07:44.732 10687.409 - 10737.822: 98.7407% ( 12) 00:07:44.732 10737.822 - 10788.234: 98.8106% ( 12) 00:07:44.732 10788.234 - 10838.646: 98.8806% ( 12) 00:07:44.732 10838.646 - 10889.058: 98.9156% ( 6) 00:07:44.732 10889.058 - 10939.471: 98.9739% ( 10) 00:07:44.732 10939.471 - 10989.883: 99.1313% ( 27) 00:07:44.732 10989.883 - 11040.295: 99.1488% ( 3) 00:07:44.732 11040.295 - 11090.708: 99.1604% ( 2) 00:07:44.732 11090.708 - 11141.120: 99.1663% ( 1) 00:07:44.732 11141.120 - 11191.532: 99.1779% ( 2) 00:07:44.732 11191.532 - 11241.945: 99.1838% ( 1) 00:07:44.732 11241.945 - 11292.357: 99.1954% ( 2) 00:07:44.732 11292.357 - 11342.769: 99.2013% ( 1) 00:07:44.732 11342.769 - 11393.182: 99.2129% ( 2) 00:07:44.732 11393.182 - 11443.594: 99.2246% ( 2) 00:07:44.732 11443.594 - 11494.006: 99.2362% ( 2) 00:07:44.732 11494.006 - 11544.418: 99.2421% ( 1) 00:07:44.732 11544.418 - 11594.831: 99.2537% ( 2) 00:07:44.732 21979.766 - 22080.591: 99.2654% ( 2) 00:07:44.732 22080.591 - 22181.415: 99.2887% ( 4) 00:07:44.732 22181.415 - 22282.240: 99.3120% ( 4) 00:07:44.732 22282.240 - 22383.065: 99.3354% ( 4) 00:07:44.732 22383.065 - 22483.889: 99.3587% ( 4) 00:07:44.732 22483.889 - 22584.714: 99.3820% ( 4) 00:07:44.732 22584.714 - 22685.538: 99.4053% ( 4) 00:07:44.732 22685.538 - 22786.363: 99.4228% ( 3) 00:07:44.732 22786.363 - 22887.188: 99.4461% ( 4) 00:07:44.732 22887.188 - 22988.012: 99.4636% ( 3) 00:07:44.732 22988.012 - 23088.837: 99.4753% ( 2) 00:07:44.732 23088.837 - 23189.662: 99.4928% ( 3) 00:07:44.732 23189.662 - 23290.486: 99.5044% ( 2) 00:07:44.732 23290.486 - 23391.311: 99.5219% ( 3) 00:07:44.732 23391.311 - 23492.135: 99.5394% ( 3) 00:07:44.732 23492.135 - 23592.960: 99.5511% ( 2) 00:07:44.732 23592.960 - 23693.785: 99.5744% ( 4) 00:07:44.732 23693.785 - 23794.609: 99.5861% ( 2) 00:07:44.732 23794.609 - 23895.434: 99.6094% ( 4) 00:07:44.732 23895.434 - 23996.258: 99.6269% ( 3) 00:07:44.732 27424.295 - 27625.945: 99.6327% ( 1) 00:07:44.732 27625.945 - 27827.594: 99.6677% ( 6) 00:07:44.732 27827.594 - 28029.243: 99.7143% ( 8) 00:07:44.732 28029.243 - 28230.892: 99.7785% ( 11) 00:07:44.732 28230.892 - 28432.542: 99.8134% ( 6) 00:07:44.732 28432.542 - 28634.191: 99.8309% ( 3) 00:07:44.732 28835.840 - 29037.489: 99.8426% ( 2) 00:07:44.732 29037.489 - 29239.138: 99.8834% ( 7) 00:07:44.732 29239.138 - 29440.788: 99.9300% ( 8) 00:07:44.732 29440.788 - 29642.437: 99.9708% ( 7) 00:07:44.732 29642.437 - 29844.086: 100.0000% ( 5) 00:07:44.732 00:07:44.732 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:44.732 ============================================================================== 00:07:44.732 Range in us Cumulative IO count 00:07:44.732 5873.034 - 5898.240: 0.0058% ( 1) 00:07:44.732 6024.271 - 6049.477: 0.0117% ( 1) 00:07:44.732 6049.477 - 6074.683: 0.0350% ( 4) 00:07:44.732 6074.683 - 6099.889: 0.0466% ( 2) 00:07:44.732 6099.889 - 6125.095: 0.0875% ( 7) 00:07:44.732 6125.095 - 6150.302: 0.1224% ( 6) 00:07:44.732 6150.302 - 6175.508: 0.1516% ( 5) 00:07:44.732 6175.508 - 6200.714: 0.1924% ( 7) 00:07:44.732 6200.714 - 6225.920: 0.2215% ( 5) 00:07:44.732 6225.920 - 6251.126: 0.3090% ( 15) 00:07:44.732 6251.126 - 6276.332: 0.4023% ( 
16) 00:07:44.732 6276.332 - 6301.538: 0.5014% ( 17) 00:07:44.732 6301.538 - 6326.745: 0.6297% ( 22) 00:07:44.732 6326.745 - 6351.951: 0.7288% ( 17) 00:07:44.732 6351.951 - 6377.157: 0.8454% ( 20) 00:07:44.732 6377.157 - 6402.363: 0.9620% ( 20) 00:07:44.732 6402.363 - 6427.569: 1.1544% ( 33) 00:07:44.732 6427.569 - 6452.775: 1.3584% ( 35) 00:07:44.732 6452.775 - 6503.188: 1.8190% ( 79) 00:07:44.732 6503.188 - 6553.600: 2.6644% ( 145) 00:07:44.732 6553.600 - 6604.012: 4.1278% ( 251) 00:07:44.732 6604.012 - 6654.425: 6.1567% ( 348) 00:07:44.732 6654.425 - 6704.837: 9.5849% ( 588) 00:07:44.732 6704.837 - 6755.249: 13.9459% ( 748) 00:07:44.732 6755.249 - 6805.662: 19.9802% ( 1035) 00:07:44.732 6805.662 - 6856.074: 25.9620% ( 1026) 00:07:44.732 6856.074 - 6906.486: 32.6959% ( 1155) 00:07:44.732 6906.486 - 6956.898: 38.9984% ( 1081) 00:07:44.732 6956.898 - 7007.311: 44.9569% ( 1022) 00:07:44.732 7007.311 - 7057.723: 50.5072% ( 952) 00:07:44.732 7057.723 - 7108.135: 55.8069% ( 909) 00:07:44.732 7108.135 - 7158.548: 60.6635% ( 833) 00:07:44.732 7158.548 - 7208.960: 63.7477% ( 529) 00:07:44.732 7208.960 - 7259.372: 66.2372% ( 427) 00:07:44.732 7259.372 - 7309.785: 68.4876% ( 386) 00:07:44.732 7309.785 - 7360.197: 70.3708% ( 323) 00:07:44.732 7360.197 - 7410.609: 72.0149% ( 282) 00:07:44.732 7410.609 - 7461.022: 73.7407% ( 296) 00:07:44.732 7461.022 - 7511.434: 75.1283% ( 238) 00:07:44.732 7511.434 - 7561.846: 76.6150% ( 255) 00:07:44.732 7561.846 - 7612.258: 77.9443% ( 228) 00:07:44.732 7612.258 - 7662.671: 79.0812% ( 195) 00:07:44.732 7662.671 - 7713.083: 80.1656% ( 186) 00:07:44.732 7713.083 - 7763.495: 80.9935% ( 142) 00:07:44.732 7763.495 - 7813.908: 81.6523% ( 113) 00:07:44.732 7813.908 - 7864.320: 82.4685% ( 140) 00:07:44.732 7864.320 - 7914.732: 82.9058% ( 75) 00:07:44.732 7914.732 - 7965.145: 83.4888% ( 100) 00:07:44.732 7965.145 - 8015.557: 84.2701% ( 134) 00:07:44.732 8015.557 - 8065.969: 84.7773% ( 87) 00:07:44.732 8065.969 - 8116.382: 85.2262% ( 77) 00:07:44.732 8116.382 - 8166.794: 85.8034% ( 99) 00:07:44.732 8166.794 - 8217.206: 86.3923% ( 101) 00:07:44.732 8217.206 - 8267.618: 86.9520% ( 96) 00:07:44.732 8267.618 - 8318.031: 87.7682% ( 140) 00:07:44.732 8318.031 - 8368.443: 88.5028% ( 126) 00:07:44.732 8368.443 - 8418.855: 88.9984% ( 85) 00:07:44.732 8418.855 - 8469.268: 89.6688% ( 115) 00:07:44.732 8469.268 - 8519.680: 90.2285% ( 96) 00:07:44.732 8519.680 - 8570.092: 90.7125% ( 83) 00:07:44.732 8570.092 - 8620.505: 91.2896% ( 99) 00:07:44.732 8620.505 - 8670.917: 91.6220% ( 57) 00:07:44.732 8670.917 - 8721.329: 91.9951% ( 64) 00:07:44.732 8721.329 - 8771.742: 92.4499% ( 78) 00:07:44.732 8771.742 - 8822.154: 92.8930% ( 76) 00:07:44.732 8822.154 - 8872.566: 93.2836% ( 67) 00:07:44.732 8872.566 - 8922.978: 93.6451% ( 62) 00:07:44.732 8922.978 - 8973.391: 94.0357% ( 67) 00:07:44.732 8973.391 - 9023.803: 94.3505% ( 54) 00:07:44.732 9023.803 - 9074.215: 94.8403% ( 84) 00:07:44.732 9074.215 - 9124.628: 95.1318% ( 50) 00:07:44.732 9124.628 - 9175.040: 95.2950% ( 28) 00:07:44.732 9175.040 - 9225.452: 95.4291% ( 23) 00:07:44.732 9225.452 - 9275.865: 95.5690% ( 24) 00:07:44.732 9275.865 - 9326.277: 95.7264% ( 27) 00:07:44.732 9326.277 - 9376.689: 95.9247% ( 34) 00:07:44.732 9376.689 - 9427.102: 96.2045% ( 48) 00:07:44.732 9427.102 - 9477.514: 96.5368% ( 57) 00:07:44.732 9477.514 - 9527.926: 96.6185% ( 14) 00:07:44.732 9527.926 - 9578.338: 96.6943% ( 13) 00:07:44.732 9578.338 - 9628.751: 96.7642% ( 12) 00:07:44.732 9628.751 - 9679.163: 96.8342% ( 12) 00:07:44.733 9679.163 - 9729.575: 96.8750% ( 
7) 00:07:44.733 9729.575 - 9779.988: 96.9216% ( 8) 00:07:44.733 9779.988 - 9830.400: 96.9566% ( 6) 00:07:44.733 9830.400 - 9880.812: 96.9974% ( 7) 00:07:44.733 9880.812 - 9931.225: 97.0324% ( 6) 00:07:44.733 9931.225 - 9981.637: 97.0557% ( 4) 00:07:44.733 9981.637 - 10032.049: 97.1199% ( 11) 00:07:44.733 10032.049 - 10082.462: 97.2015% ( 14) 00:07:44.733 10082.462 - 10132.874: 97.3239% ( 21) 00:07:44.733 10132.874 - 10183.286: 97.5396% ( 37) 00:07:44.733 10183.286 - 10233.698: 97.7845% ( 42) 00:07:44.733 10233.698 - 10284.111: 97.9653% ( 31) 00:07:44.733 10284.111 - 10334.523: 98.1285% ( 28) 00:07:44.733 10334.523 - 10384.935: 98.2451% ( 20) 00:07:44.733 10384.935 - 10435.348: 98.3151% ( 12) 00:07:44.733 10435.348 - 10485.760: 98.3909% ( 13) 00:07:44.733 10485.760 - 10536.172: 98.5250% ( 23) 00:07:44.733 10536.172 - 10586.585: 98.7407% ( 37) 00:07:44.733 10586.585 - 10636.997: 98.8165% ( 13) 00:07:44.733 10636.997 - 10687.409: 98.8631% ( 8) 00:07:44.733 10687.409 - 10737.822: 98.9272% ( 11) 00:07:44.733 10737.822 - 10788.234: 98.9622% ( 6) 00:07:44.733 10788.234 - 10838.646: 99.0030% ( 7) 00:07:44.733 10838.646 - 10889.058: 99.1546% ( 26) 00:07:44.733 10889.058 - 10939.471: 99.1663% ( 2) 00:07:44.733 10939.471 - 10989.883: 99.1779% ( 2) 00:07:44.733 10989.883 - 11040.295: 99.1838% ( 1) 00:07:44.733 11040.295 - 11090.708: 99.1954% ( 2) 00:07:44.733 11090.708 - 11141.120: 99.2013% ( 1) 00:07:44.733 11141.120 - 11191.532: 99.2129% ( 2) 00:07:44.733 11191.532 - 11241.945: 99.2188% ( 1) 00:07:44.733 11241.945 - 11292.357: 99.2304% ( 2) 00:07:44.733 11292.357 - 11342.769: 99.2362% ( 1) 00:07:44.733 11342.769 - 11393.182: 99.2479% ( 2) 00:07:44.733 11393.182 - 11443.594: 99.2537% ( 1) 00:07:44.733 20164.923 - 20265.748: 99.2596% ( 1) 00:07:44.733 20870.695 - 20971.520: 99.2771% ( 3) 00:07:44.733 20971.520 - 21072.345: 99.3062% ( 5) 00:07:44.733 21072.345 - 21173.169: 99.3412% ( 6) 00:07:44.733 21173.169 - 21273.994: 99.3645% ( 4) 00:07:44.733 21273.994 - 21374.818: 99.3995% ( 6) 00:07:44.733 21374.818 - 21475.643: 99.4228% ( 4) 00:07:44.733 21475.643 - 21576.468: 99.4403% ( 3) 00:07:44.733 21576.468 - 21677.292: 99.4636% ( 4) 00:07:44.733 21677.292 - 21778.117: 99.4753% ( 2) 00:07:44.733 21778.117 - 21878.942: 99.4986% ( 4) 00:07:44.733 21878.942 - 21979.766: 99.5161% ( 3) 00:07:44.733 21979.766 - 22080.591: 99.5394% ( 4) 00:07:44.733 22080.591 - 22181.415: 99.5569% ( 3) 00:07:44.733 22181.415 - 22282.240: 99.5744% ( 3) 00:07:44.733 22282.240 - 22383.065: 99.5977% ( 4) 00:07:44.733 22383.065 - 22483.889: 99.6152% ( 3) 00:07:44.733 22483.889 - 22584.714: 99.6269% ( 2) 00:07:44.733 26214.400 - 26416.049: 99.6502% ( 4) 00:07:44.733 26416.049 - 26617.698: 99.6968% ( 8) 00:07:44.733 26617.698 - 26819.348: 99.7726% ( 13) 00:07:44.733 26819.348 - 27020.997: 99.8542% ( 14) 00:07:44.733 27020.997 - 27222.646: 99.8834% ( 5) 00:07:44.733 27625.945 - 27827.594: 99.9067% ( 4) 00:07:44.733 27827.594 - 28029.243: 99.9417% ( 6) 00:07:44.733 28029.243 - 28230.892: 99.9767% ( 6) 00:07:44.733 28230.892 - 28432.542: 100.0000% ( 4) 00:07:44.733 00:07:44.733 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:44.733 ============================================================================== 00:07:44.733 Range in us Cumulative IO count 00:07:44.733 5898.240 - 5923.446: 0.0117% ( 2) 00:07:44.733 5948.652 - 5973.858: 0.0175% ( 1) 00:07:44.733 6049.477 - 6074.683: 0.0292% ( 2) 00:07:44.733 6074.683 - 6099.889: 0.0641% ( 6) 00:07:44.733 6099.889 - 6125.095: 0.0875% ( 4) 00:07:44.733 6125.095 - 6150.302: 
0.1341% ( 8) 00:07:44.733 6150.302 - 6175.508: 0.1749% ( 7) 00:07:44.733 6175.508 - 6200.714: 0.2157% ( 7) 00:07:44.733 6200.714 - 6225.920: 0.3032% ( 15) 00:07:44.733 6225.920 - 6251.126: 0.4198% ( 20) 00:07:44.733 6251.126 - 6276.332: 0.5247% ( 18) 00:07:44.733 6276.332 - 6301.538: 0.6122% ( 15) 00:07:44.733 6301.538 - 6326.745: 0.7346% ( 21) 00:07:44.733 6326.745 - 6351.951: 0.8104% ( 13) 00:07:44.733 6351.951 - 6377.157: 0.9037% ( 16) 00:07:44.733 6377.157 - 6402.363: 1.0319% ( 22) 00:07:44.733 6402.363 - 6427.569: 1.1835% ( 26) 00:07:44.733 6427.569 - 6452.775: 1.3701% ( 32) 00:07:44.733 6452.775 - 6503.188: 1.8190% ( 77) 00:07:44.733 6503.188 - 6553.600: 2.6586% ( 144) 00:07:44.733 6553.600 - 6604.012: 4.0812% ( 244) 00:07:44.733 6604.012 - 6654.425: 6.3899% ( 396) 00:07:44.733 6654.425 - 6704.837: 9.7715% ( 580) 00:07:44.733 6704.837 - 6755.249: 13.7652% ( 685) 00:07:44.733 6755.249 - 6805.662: 18.5984% ( 829) 00:07:44.733 6805.662 - 6856.074: 24.9067% ( 1082) 00:07:44.733 6856.074 - 6906.486: 32.2062% ( 1252) 00:07:44.733 6906.486 - 6956.898: 39.3773% ( 1230) 00:07:44.733 6956.898 - 7007.311: 45.4816% ( 1047) 00:07:44.733 7007.311 - 7057.723: 51.3759% ( 1011) 00:07:44.733 7057.723 - 7108.135: 56.3141% ( 847) 00:07:44.733 7108.135 - 7158.548: 61.2990% ( 855) 00:07:44.733 7158.548 - 7208.960: 64.2899% ( 513) 00:07:44.733 7208.960 - 7259.372: 66.3071% ( 346) 00:07:44.733 7259.372 - 7309.785: 68.5809% ( 390) 00:07:44.733 7309.785 - 7360.197: 70.6273% ( 351) 00:07:44.733 7360.197 - 7410.609: 72.4872% ( 319) 00:07:44.733 7410.609 - 7461.022: 74.0555% ( 269) 00:07:44.733 7461.022 - 7511.434: 75.6996% ( 282) 00:07:44.733 7511.434 - 7561.846: 76.6325% ( 160) 00:07:44.733 7561.846 - 7612.258: 78.0084% ( 236) 00:07:44.733 7612.258 - 7662.671: 79.0170% ( 173) 00:07:44.733 7662.671 - 7713.083: 79.7400% ( 124) 00:07:44.733 7713.083 - 7763.495: 80.3930% ( 112) 00:07:44.733 7763.495 - 7813.908: 80.9410% ( 94) 00:07:44.733 7813.908 - 7864.320: 81.9321% ( 170) 00:07:44.733 7864.320 - 7914.732: 82.6434% ( 122) 00:07:44.733 7914.732 - 7965.145: 83.4247% ( 134) 00:07:44.733 7965.145 - 8015.557: 84.0718% ( 111) 00:07:44.733 8015.557 - 8065.969: 84.4683% ( 68) 00:07:44.733 8065.969 - 8116.382: 84.9697% ( 86) 00:07:44.733 8116.382 - 8166.794: 85.4536% ( 83) 00:07:44.733 8166.794 - 8217.206: 85.8967% ( 76) 00:07:44.733 8217.206 - 8267.618: 86.6779% ( 134) 00:07:44.733 8267.618 - 8318.031: 87.6224% ( 162) 00:07:44.733 8318.031 - 8368.443: 88.6311% ( 173) 00:07:44.733 8368.443 - 8418.855: 89.4473% ( 140) 00:07:44.733 8418.855 - 8469.268: 90.0012% ( 95) 00:07:44.733 8469.268 - 8519.680: 90.6425% ( 110) 00:07:44.733 8519.680 - 8570.092: 91.3130% ( 115) 00:07:44.733 8570.092 - 8620.505: 91.7736% ( 79) 00:07:44.733 8620.505 - 8670.917: 92.1933% ( 72) 00:07:44.733 8670.917 - 8721.329: 92.5781% ( 66) 00:07:44.733 8721.329 - 8771.742: 92.8230% ( 42) 00:07:44.733 8771.742 - 8822.154: 92.9979% ( 30) 00:07:44.733 8822.154 - 8872.566: 93.2020% ( 35) 00:07:44.733 8872.566 - 8922.978: 93.4527% ( 43) 00:07:44.733 8922.978 - 8973.391: 93.7908% ( 58) 00:07:44.733 8973.391 - 9023.803: 94.1523% ( 62) 00:07:44.733 9023.803 - 9074.215: 94.4496% ( 51) 00:07:44.733 9074.215 - 9124.628: 94.7645% ( 54) 00:07:44.733 9124.628 - 9175.040: 94.9918% ( 39) 00:07:44.733 9175.040 - 9225.452: 95.1376% ( 25) 00:07:44.733 9225.452 - 9275.865: 95.2833% ( 25) 00:07:44.733 9275.865 - 9326.277: 95.5166% ( 40) 00:07:44.733 9326.277 - 9376.689: 95.7264% ( 36) 00:07:44.733 9376.689 - 9427.102: 95.8955% ( 29) 00:07:44.733 9427.102 - 9477.514: 
96.0529% ( 27) 00:07:44.733 9477.514 - 9527.926: 96.2045% ( 26) 00:07:44.733 9527.926 - 9578.338: 96.3853% ( 31) 00:07:44.733 9578.338 - 9628.751: 96.4902% ( 18) 00:07:44.733 9628.751 - 9679.163: 96.5893% ( 17) 00:07:44.733 9679.163 - 9729.575: 96.6535% ( 11) 00:07:44.733 9729.575 - 9779.988: 96.7992% ( 25) 00:07:44.733 9779.988 - 9830.400: 96.9625% ( 28) 00:07:44.733 9830.400 - 9880.812: 97.1315% ( 29) 00:07:44.733 9880.812 - 9931.225: 97.3064% ( 30) 00:07:44.734 9931.225 - 9981.637: 97.4289% ( 21) 00:07:44.734 9981.637 - 10032.049: 97.5571% ( 22) 00:07:44.734 10032.049 - 10082.462: 97.6796% ( 21) 00:07:44.734 10082.462 - 10132.874: 97.7495% ( 12) 00:07:44.734 10132.874 - 10183.286: 97.8312% ( 14) 00:07:44.734 10183.286 - 10233.698: 97.9419% ( 19) 00:07:44.734 10233.698 - 10284.111: 98.0469% ( 18) 00:07:44.734 10284.111 - 10334.523: 98.1635% ( 20) 00:07:44.734 10334.523 - 10384.935: 98.3209% ( 27) 00:07:44.734 10384.935 - 10435.348: 98.5075% ( 32) 00:07:44.734 10435.348 - 10485.760: 98.6182% ( 19) 00:07:44.734 10485.760 - 10536.172: 98.7115% ( 16) 00:07:44.734 10536.172 - 10586.585: 98.8165% ( 18) 00:07:44.734 10586.585 - 10636.997: 98.9331% ( 20) 00:07:44.734 10636.997 - 10687.409: 98.9972% ( 11) 00:07:44.734 10687.409 - 10737.822: 99.0380% ( 7) 00:07:44.734 10737.822 - 10788.234: 99.0730% ( 6) 00:07:44.734 10788.234 - 10838.646: 99.0963% ( 4) 00:07:44.734 10838.646 - 10889.058: 99.1255% ( 5) 00:07:44.734 10889.058 - 10939.471: 99.1488% ( 4) 00:07:44.734 10939.471 - 10989.883: 99.1663% ( 3) 00:07:44.734 10989.883 - 11040.295: 99.1779% ( 2) 00:07:44.734 11040.295 - 11090.708: 99.1896% ( 2) 00:07:44.734 11090.708 - 11141.120: 99.2071% ( 3) 00:07:44.734 11141.120 - 11191.532: 99.2188% ( 2) 00:07:44.734 11191.532 - 11241.945: 99.2304% ( 2) 00:07:44.734 11241.945 - 11292.357: 99.2421% ( 2) 00:07:44.734 11292.357 - 11342.769: 99.2537% ( 2) 00:07:44.734 19055.852 - 19156.677: 99.2596% ( 1) 00:07:44.734 19963.274 - 20064.098: 99.2712% ( 2) 00:07:44.734 20064.098 - 20164.923: 99.2945% ( 4) 00:07:44.734 20164.923 - 20265.748: 99.3179% ( 4) 00:07:44.734 20265.748 - 20366.572: 99.3412% ( 4) 00:07:44.734 20366.572 - 20467.397: 99.3703% ( 5) 00:07:44.734 20467.397 - 20568.222: 99.4986% ( 22) 00:07:44.734 20568.222 - 20669.046: 99.5161% ( 3) 00:07:44.734 20669.046 - 20769.871: 99.5336% ( 3) 00:07:44.734 20769.871 - 20870.695: 99.5511% ( 3) 00:07:44.734 20870.695 - 20971.520: 99.5686% ( 3) 00:07:44.734 20971.520 - 21072.345: 99.5802% ( 2) 00:07:44.734 21072.345 - 21173.169: 99.5977% ( 3) 00:07:44.734 21173.169 - 21273.994: 99.6152% ( 3) 00:07:44.734 21273.994 - 21374.818: 99.6269% ( 2) 00:07:44.734 24399.557 - 24500.382: 99.6502% ( 4) 00:07:44.734 24500.382 - 24601.206: 99.6735% ( 4) 00:07:44.734 24601.206 - 24702.031: 99.6910% ( 3) 00:07:44.734 24702.031 - 24802.855: 99.7143% ( 4) 00:07:44.734 24802.855 - 24903.680: 99.7376% ( 4) 00:07:44.734 24903.680 - 25004.505: 99.7610% ( 4) 00:07:44.734 25004.505 - 25105.329: 99.7785% ( 3) 00:07:44.734 25105.329 - 25206.154: 99.8018% ( 4) 00:07:44.734 25206.154 - 25306.978: 99.8251% ( 4) 00:07:44.734 25306.978 - 25407.803: 99.8484% ( 4) 00:07:44.734 25407.803 - 25508.628: 99.9067% ( 10) 00:07:44.734 25508.628 - 25609.452: 99.9184% ( 2) 00:07:44.734 25609.452 - 25710.277: 99.9417% ( 4) 00:07:44.734 25710.277 - 25811.102: 99.9592% ( 3) 00:07:44.734 25811.102 - 26012.751: 99.9942% ( 6) 00:07:44.734 26012.751 - 26214.400: 100.0000% ( 1) 00:07:44.734 00:07:44.734 23:51:51 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:44.734 00:07:44.734 real 
0m2.548s
00:07:44.734 user 0m2.221s
00:07:44.734 sys 0m0.213s
00:07:44.734 23:51:51 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.734 23:51:51 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:44.734 ************************************
00:07:44.734 END TEST nvme_perf
00:07:44.734 ************************************
00:07:44.734 23:51:51 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:44.734 ************************************
00:07:44.734 START TEST nvme_hello_world
00:07:44.734 ************************************
00:07:44.734 23:51:51 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:44.734 Initializing NVMe Controllers
00:07:44.734 Attached to 0000:00:10.0
00:07:44.734 Namespace ID: 1 size: 6GB
00:07:44.734 Attached to 0000:00:11.0
00:07:44.734 Namespace ID: 1 size: 5GB
00:07:44.734 Attached to 0000:00:13.0
00:07:44.734 Namespace ID: 1 size: 1GB
00:07:44.734 Attached to 0000:00:12.0
00:07:44.734 Namespace ID: 1 size: 4GB
00:07:44.734 Namespace ID: 2 size: 4GB
00:07:44.734 Namespace ID: 3 size: 4GB
00:07:44.734 Initialization complete.
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
00:07:44.734 INFO: using host memory buffer for IO
00:07:44.734 Hello world!
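The hello_world run above follows SPDK's standard probe/attach flow: enumerate local PCIe controllers, attach to each one, then write a block per namespace through an I/O qpair and poll for the completion. A minimal sketch of that flow in C; the callback names, buffer size, and the completion flag are illustrative rather than taken from the log, error handling is omitted, and the real example defers I/O until after probe returns:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static volatile bool g_done;

    static void write_done(void *arg, const struct spdk_nvme_cpl *cpl) { g_done = true; }

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
        return true;                              /* attach to every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts) {
        /* one "Attached to <traddr>" line per controller above */
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
        struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        /* pinned host-memory DMA buffer: the "using host memory buffer for IO" case */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);
        snprintf(buf, 0x1000, "Hello world!\n");
        g_done = false;
        spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1 /* block count */,
                               write_done, NULL, 0);
        while (!g_done) {
            spdk_nvme_qpair_process_completions(qp, 0);  /* poll until the write lands */
        }
    }

    int main(void) {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL transport id == probe all local PCIe controllers */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }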
00:07:44.734 00:07:44.734 real 0m0.233s 00:07:44.734 user 0m0.080s 00:07:44.734 sys 0m0.102s 00:07:44.734 ************************************ 00:07:44.734 END TEST nvme_hello_world 00:07:44.734 ************************************ 00:07:44.734 23:51:51 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.734 23:51:51 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:44.734 23:51:51 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.734 23:51:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.734 ************************************ 00:07:44.734 START TEST nvme_sgl 00:07:44.734 ************************************ 00:07:44.734 23:51:51 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:44.992 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:44.993 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:44.993 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:44.993 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:44.993 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:44.993 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:44.993 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:44.993 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:44.993 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:44.993 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:44.993 NVMe Readv/Writev Request test 00:07:44.993 Attached to 0000:00:10.0 00:07:44.993 Attached to 0000:00:11.0 00:07:44.993 Attached to 0000:00:13.0 00:07:44.993 Attached to 0000:00:12.0 00:07:44.993 0000:00:10.0: build_io_request_2 test passed 00:07:44.993 0000:00:10.0: build_io_request_4 test passed 00:07:44.993 0000:00:10.0: build_io_request_5 test passed 00:07:44.993 0000:00:10.0: build_io_request_6 test passed 00:07:44.993 0000:00:10.0: build_io_request_7 test passed 00:07:44.993 0000:00:10.0: build_io_request_10 test passed 00:07:44.993 0000:00:11.0: build_io_request_2 test passed 00:07:44.993 0000:00:11.0: build_io_request_4 test passed 00:07:44.993 0000:00:11.0: build_io_request_5 test passed 00:07:44.993 0000:00:11.0: build_io_request_6 test passed 00:07:44.993 0000:00:11.0: build_io_request_7 test passed 00:07:44.993 0000:00:11.0: build_io_request_10 test passed 00:07:44.993 Cleaning up... 00:07:44.993 00:07:44.993 real 0m0.296s 00:07:44.993 user 0m0.150s 00:07:44.993 sys 0m0.100s 00:07:44.993 23:51:51 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.993 23:51:51 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:44.993 ************************************ 00:07:44.993 END TEST nvme_sgl 00:07:44.993 ************************************ 00:07:45.251 23:51:51 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:45.251 23:51:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.251 23:51:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.251 23:51:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.251 ************************************ 00:07:45.251 START TEST nvme_e2edp 00:07:45.251 ************************************ 00:07:45.251 23:51:51 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:45.251 NVMe Write/Read with End-to-End data protection test 00:07:45.251 Attached to 0000:00:10.0 00:07:45.251 Attached to 0000:00:11.0 00:07:45.251 Attached to 0000:00:13.0 00:07:45.251 Attached to 0000:00:12.0 00:07:45.251 Cleaning up... 
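The nvme_sgl pass/fail lines above come from building vectored requests: instead of one flat buffer, the driver pulls scatter-gather elements from the caller through a reset/next callback pair, and a request whose fragments do not add up to a whole number of blocks is rejected with "Invalid IO length parameter". A sketch of the callback shape, assuming ns and qpair from a probe/attach flow like the earlier one; the context struct and fragment layout are illustrative:

    #include "spdk/nvme.h"

    struct sgl_ctx {
        void    *frag[2];   /* two fragments that together cover the payload */
        uint32_t len[2];
        int      idx;
    };

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) { /* ... */ }

    static void reset_sgl(void *arg, uint32_t offset) {
        struct sgl_ctx *c = arg;
        c->idx = 0;                 /* restart iteration; offset handling omitted */
    }

    static int next_sge(void *arg, void **addr, uint32_t *len) {
        struct sgl_ctx *c = arg;
        *addr = c->frag[c->idx];
        *len  = c->len[c->idx];
        c->idx++;
        return 0;
    }

    static int read_one_block_sgl(struct spdk_nvme_ns *ns,
                                  struct spdk_nvme_qpair *qpair,
                                  struct sgl_ctx *ctx) {
        /* a nonzero return is the rejected-request case logged above:
         * len[0] + len[1] must equal sector size * block count */
        return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* LBA */, 1 /* blocks */,
                                      io_done, ctx, 0, reset_sgl, next_sge);
    }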
00:07:45.251 ************************************ 00:07:45.251 END TEST nvme_e2edp 00:07:45.251 ************************************ 00:07:45.251 00:07:45.251 real 0m0.215s 00:07:45.251 user 0m0.069s 00:07:45.251 sys 0m0.096s 00:07:45.251 23:51:51 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.251 23:51:51 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 23:51:51 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:45.509 23:51:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.509 23:51:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.509 23:51:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 ************************************ 00:07:45.509 START TEST nvme_reserve 00:07:45.509 ************************************ 00:07:45.509 23:51:51 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:45.509 ===================================================== 00:07:45.509 NVMe Controller at PCI bus 0, device 16, function 0 00:07:45.509 ===================================================== 00:07:45.509 Reservations: Not Supported 00:07:45.509 ===================================================== 00:07:45.509 NVMe Controller at PCI bus 0, device 17, function 0 00:07:45.509 ===================================================== 00:07:45.509 Reservations: Not Supported 00:07:45.509 ===================================================== 00:07:45.509 NVMe Controller at PCI bus 0, device 19, function 0 00:07:45.509 ===================================================== 00:07:45.509 Reservations: Not Supported 00:07:45.509 ===================================================== 00:07:45.509 NVMe Controller at PCI bus 0, device 18, function 0 00:07:45.509 ===================================================== 00:07:45.509 Reservations: Not Supported 00:07:45.509 Reservation test passed 00:07:45.509 00:07:45.509 real 0m0.195s 00:07:45.509 user 0m0.070s 00:07:45.509 sys 0m0.095s 00:07:45.509 ************************************ 00:07:45.509 END TEST nvme_reserve 00:07:45.509 ************************************ 00:07:45.509 23:51:52 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.509 23:51:52 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 23:51:52 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:45.509 23:51:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.509 23:51:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.509 23:51:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 ************************************ 00:07:45.509 START TEST nvme_err_injection 00:07:45.509 ************************************ 00:07:45.509 23:51:52 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:45.767 NVMe Error Injection test 00:07:45.767 Attached to 0000:00:10.0 00:07:45.767 Attached to 0000:00:11.0 00:07:45.767 Attached to 0000:00:13.0 00:07:45.767 Attached to 0000:00:12.0 00:07:45.767 0000:00:10.0: get features failed as expected 00:07:45.767 0000:00:11.0: get features failed as expected 00:07:45.767 0000:00:13.0: get features failed as expected 00:07:45.767 0000:00:12.0: get features failed as expected 00:07:45.767 
0000:00:10.0: get features successfully as expected 00:07:45.767 0000:00:11.0: get features successfully as expected 00:07:45.767 0000:00:13.0: get features successfully as expected 00:07:45.767 0000:00:12.0: get features successfully as expected 00:07:45.767 0000:00:10.0: read failed as expected 00:07:45.767 0000:00:11.0: read failed as expected 00:07:45.767 0000:00:13.0: read failed as expected 00:07:45.767 0000:00:12.0: read failed as expected 00:07:45.767 0000:00:10.0: read successfully as expected 00:07:45.767 0000:00:11.0: read successfully as expected 00:07:45.767 0000:00:13.0: read successfully as expected 00:07:45.767 0000:00:12.0: read successfully as expected 00:07:45.767 Cleaning up... 00:07:45.767 ************************************ 00:07:45.767 END TEST nvme_err_injection 00:07:45.767 ************************************ 00:07:45.767 00:07:45.767 real 0m0.218s 00:07:45.767 user 0m0.081s 00:07:45.767 sys 0m0.097s 00:07:45.767 23:51:52 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.767 23:51:52 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:45.767 23:51:52 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:45.767 23:51:52 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:45.767 23:51:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.767 23:51:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.767 ************************************ 00:07:45.767 START TEST nvme_overhead 00:07:45.767 ************************************ 00:07:45.767 23:51:52 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:47.141 Initializing NVMe Controllers 00:07:47.141 Attached to 0000:00:10.0 00:07:47.141 Attached to 0000:00:11.0 00:07:47.141 Attached to 0000:00:13.0 00:07:47.141 Attached to 0000:00:12.0 00:07:47.141 Initialization complete. Launching workers. 
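The nvme_overhead histograms that follow measure per-I/O software cost inside the driver, not device latency: how long queuing one command takes ("submit") and how long one completion-reaping pass takes ("complete"). A sketch of the measurement idea, not the tool's exact code; histogram_tally and the two hist pointers are hypothetical helpers:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    extern void histogram_tally(uint64_t *hist, double lat_ns);  /* hypothetical */
    extern uint64_t *g_submit_hist, *g_complete_hist;            /* hypothetical */

    static void time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                            void *buf, spdk_nvme_cmd_cb io_done) {
        double ns_per_tick = 1e9 / (double)spdk_get_ticks_hz();

        /* submit-side cost: TSC delta around queuing the command */
        uint64_t t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* LBA */, 1, io_done, NULL, 0);
        histogram_tally(g_submit_hist, (spdk_get_ticks() - t0) * ns_per_tick);

        /* complete-side cost: time only the reaping pass that finds the cpl */
        t0 = spdk_get_ticks();
        while (spdk_nvme_qpair_process_completions(qp, 0) == 0) {
            t0 = spdk_get_ticks();   /* nothing reaped yet; restart the clock */
        }
        histogram_tally(g_complete_hist, (spdk_get_ticks() - t0) * ns_per_tick);
    }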
00:07:47.141 submit (in ns) avg, min, max = 11333.2, 10230.0, 75172.3 00:07:47.141 complete (in ns) avg, min, max = 7679.1, 7302.3, 59580.8 00:07:47.141 00:07:47.141 Submit histogram 00:07:47.141 ================ 00:07:47.141 Range in us Cumulative Count 00:07:47.141 10.191 - 10.240: 0.0055% ( 1) 00:07:47.141 10.240 - 10.289: 0.0110% ( 1) 00:07:47.141 10.535 - 10.585: 0.0165% ( 1) 00:07:47.141 10.831 - 10.880: 0.1213% ( 19) 00:07:47.141 10.880 - 10.929: 0.8322% ( 129) 00:07:47.141 10.929 - 10.978: 4.0123% ( 577) 00:07:47.141 10.978 - 11.028: 12.9078% ( 1614) 00:07:47.141 11.028 - 11.077: 29.1336% ( 2944) 00:07:47.141 11.077 - 11.126: 47.9167% ( 3408) 00:07:47.141 11.126 - 11.175: 63.6464% ( 2854) 00:07:47.141 11.175 - 11.225: 73.7103% ( 1826) 00:07:47.141 11.225 - 11.274: 79.9989% ( 1141) 00:07:47.141 11.274 - 11.323: 83.5317% ( 641) 00:07:47.141 11.323 - 11.372: 86.0725% ( 461) 00:07:47.141 11.372 - 11.422: 87.6653% ( 289) 00:07:47.141 11.422 - 11.471: 88.7456% ( 196) 00:07:47.141 11.471 - 11.520: 89.5723% ( 150) 00:07:47.141 11.520 - 11.569: 90.3384% ( 139) 00:07:47.141 11.569 - 11.618: 91.1541% ( 148) 00:07:47.141 11.618 - 11.668: 91.9202% ( 139) 00:07:47.141 11.668 - 11.717: 92.5816% ( 120) 00:07:47.141 11.717 - 11.766: 93.2485% ( 121) 00:07:47.141 11.766 - 11.815: 93.7720% ( 95) 00:07:47.141 11.815 - 11.865: 94.2515% ( 87) 00:07:47.141 11.865 - 11.914: 94.6153% ( 66) 00:07:47.141 11.914 - 11.963: 95.0011% ( 70) 00:07:47.141 11.963 - 12.012: 95.3263% ( 59) 00:07:47.141 12.012 - 12.062: 95.6349% ( 56) 00:07:47.141 12.062 - 12.111: 95.8719% ( 43) 00:07:47.141 12.111 - 12.160: 96.1089% ( 43) 00:07:47.141 12.160 - 12.209: 96.2908% ( 33) 00:07:47.141 12.209 - 12.258: 96.4727% ( 33) 00:07:47.141 12.258 - 12.308: 96.6545% ( 33) 00:07:47.141 12.308 - 12.357: 96.7482% ( 17) 00:07:47.141 12.357 - 12.406: 96.8585% ( 20) 00:07:47.141 12.406 - 12.455: 96.9687% ( 20) 00:07:47.141 12.455 - 12.505: 97.0403% ( 13) 00:07:47.141 12.505 - 12.554: 97.0955% ( 10) 00:07:47.141 12.554 - 12.603: 97.1285% ( 6) 00:07:47.141 12.603 - 12.702: 97.1836% ( 10) 00:07:47.141 12.702 - 12.800: 97.2443% ( 11) 00:07:47.141 12.800 - 12.898: 97.2773% ( 6) 00:07:47.141 12.898 - 12.997: 97.3159% ( 7) 00:07:47.141 12.997 - 13.095: 97.4041% ( 16) 00:07:47.141 13.095 - 13.194: 97.5033% ( 18) 00:07:47.141 13.194 - 13.292: 97.5805% ( 14) 00:07:47.141 13.292 - 13.391: 97.6631% ( 15) 00:07:47.141 13.391 - 13.489: 97.7954% ( 24) 00:07:47.141 13.489 - 13.588: 97.8781% ( 15) 00:07:47.141 13.588 - 13.686: 97.9167% ( 7) 00:07:47.141 13.686 - 13.785: 97.9828% ( 12) 00:07:47.141 13.785 - 13.883: 98.0489% ( 12) 00:07:47.141 13.883 - 13.982: 98.1041% ( 10) 00:07:47.141 13.982 - 14.080: 98.1261% ( 4) 00:07:47.141 14.080 - 14.178: 98.1537% ( 5) 00:07:47.141 14.178 - 14.277: 98.1812% ( 5) 00:07:47.141 14.277 - 14.375: 98.1867% ( 1) 00:07:47.141 14.375 - 14.474: 98.2033% ( 3) 00:07:47.141 14.474 - 14.572: 98.2363% ( 6) 00:07:47.141 14.572 - 14.671: 98.2474% ( 2) 00:07:47.141 14.671 - 14.769: 98.2694% ( 4) 00:07:47.141 14.769 - 14.868: 98.2859% ( 3) 00:07:47.141 14.868 - 14.966: 98.3355% ( 9) 00:07:47.141 14.966 - 15.065: 98.3410% ( 1) 00:07:47.141 15.065 - 15.163: 98.3796% ( 7) 00:07:47.141 15.163 - 15.262: 98.3962% ( 3) 00:07:47.141 15.262 - 15.360: 98.4127% ( 3) 00:07:47.141 15.360 - 15.458: 98.4568% ( 8) 00:07:47.141 15.458 - 15.557: 98.4899% ( 6) 00:07:47.141 15.557 - 15.655: 98.5284% ( 7) 00:07:47.141 15.754 - 15.852: 98.5560% ( 5) 00:07:47.141 15.852 - 15.951: 98.5615% ( 1) 00:07:47.141 15.951 - 16.049: 98.5725% ( 2) 00:07:47.141 16.049 - 
16.148: 98.5836% ( 2) 00:07:47.141 16.148 - 16.246: 98.5891% ( 1) 00:07:47.141 16.246 - 16.345: 98.5946% ( 1) 00:07:47.141 16.345 - 16.443: 98.6001% ( 1) 00:07:47.141 16.443 - 16.542: 98.6166% ( 3) 00:07:47.141 16.542 - 16.640: 98.6276% ( 2) 00:07:47.141 16.640 - 16.738: 98.7324% ( 19) 00:07:47.141 16.738 - 16.837: 98.8481% ( 21) 00:07:47.141 16.837 - 16.935: 98.9638% ( 21) 00:07:47.141 16.935 - 17.034: 99.0575% ( 17) 00:07:47.141 17.034 - 17.132: 99.1292% ( 13) 00:07:47.141 17.132 - 17.231: 99.2284% ( 18) 00:07:47.141 17.231 - 17.329: 99.3000% ( 13) 00:07:47.141 17.329 - 17.428: 99.3772% ( 14) 00:07:47.141 17.428 - 17.526: 99.4433% ( 12) 00:07:47.141 17.526 - 17.625: 99.4819% ( 7) 00:07:47.141 17.625 - 17.723: 99.5646% ( 15) 00:07:47.141 17.723 - 17.822: 99.6252% ( 11) 00:07:47.141 17.822 - 17.920: 99.6583% ( 6) 00:07:47.141 17.920 - 18.018: 99.6858% ( 5) 00:07:47.141 18.018 - 18.117: 99.7079% ( 4) 00:07:47.141 18.117 - 18.215: 99.7465% ( 7) 00:07:47.141 18.215 - 18.314: 99.7740% ( 5) 00:07:47.141 18.314 - 18.412: 99.7851% ( 2) 00:07:47.141 18.412 - 18.511: 99.7961% ( 2) 00:07:47.141 18.511 - 18.609: 99.8016% ( 1) 00:07:47.141 18.609 - 18.708: 99.8071% ( 1) 00:07:47.141 18.708 - 18.806: 99.8126% ( 1) 00:07:47.141 18.806 - 18.905: 99.8181% ( 1) 00:07:47.141 18.905 - 19.003: 99.8236% ( 1) 00:07:47.142 19.298 - 19.397: 99.8402% ( 3) 00:07:47.142 19.495 - 19.594: 99.8457% ( 1) 00:07:47.142 19.594 - 19.692: 99.8512% ( 1) 00:07:47.142 19.791 - 19.889: 99.8622% ( 2) 00:07:47.142 20.185 - 20.283: 99.8677% ( 1) 00:07:47.142 20.283 - 20.382: 99.8732% ( 1) 00:07:47.142 20.480 - 20.578: 99.8787% ( 1) 00:07:47.142 20.972 - 21.071: 99.8843% ( 1) 00:07:47.142 21.268 - 21.366: 99.8898% ( 1) 00:07:47.142 21.760 - 21.858: 99.9008% ( 2) 00:07:47.142 21.858 - 21.957: 99.9063% ( 1) 00:07:47.142 22.154 - 22.252: 99.9118% ( 1) 00:07:47.142 22.252 - 22.351: 99.9173% ( 1) 00:07:47.142 22.449 - 22.548: 99.9228% ( 1) 00:07:47.142 22.843 - 22.942: 99.9284% ( 1) 00:07:47.142 23.138 - 23.237: 99.9339% ( 1) 00:07:47.142 23.237 - 23.335: 99.9394% ( 1) 00:07:47.142 23.335 - 23.434: 99.9449% ( 1) 00:07:47.142 24.222 - 24.320: 99.9504% ( 1) 00:07:47.142 26.191 - 26.388: 99.9559% ( 1) 00:07:47.142 26.978 - 27.175: 99.9614% ( 1) 00:07:47.142 27.569 - 27.766: 99.9669% ( 1) 00:07:47.142 37.809 - 38.006: 99.9724% ( 1) 00:07:47.142 45.095 - 45.292: 99.9780% ( 1) 00:07:47.142 54.351 - 54.745: 99.9835% ( 1) 00:07:47.142 58.289 - 58.683: 99.9890% ( 1) 00:07:47.142 59.865 - 60.258: 99.9945% ( 1) 00:07:47.142 74.831 - 75.225: 100.0000% ( 1) 00:07:47.142 00:07:47.142 Complete histogram 00:07:47.142 ================== 00:07:47.142 Range in us Cumulative Count 00:07:47.142 7.286 - 7.335: 0.0220% ( 4) 00:07:47.142 7.335 - 7.385: 0.4519% ( 78) 00:07:47.142 7.385 - 7.434: 3.6376% ( 578) 00:07:47.142 7.434 - 7.483: 16.7659% ( 2382) 00:07:47.142 7.483 - 7.532: 42.4989% ( 4669) 00:07:47.142 7.532 - 7.582: 68.3532% ( 4691) 00:07:47.142 7.582 - 7.631: 84.1656% ( 2869) 00:07:47.142 7.631 - 7.680: 91.3139% ( 1297) 00:07:47.142 7.680 - 7.729: 94.6318% ( 602) 00:07:47.142 7.729 - 7.778: 96.2136% ( 287) 00:07:47.142 7.778 - 7.828: 96.9797% ( 139) 00:07:47.142 7.828 - 7.877: 97.3159% ( 61) 00:07:47.142 7.877 - 7.926: 97.5419% ( 41) 00:07:47.142 7.926 - 7.975: 97.5970% ( 10) 00:07:47.142 7.975 - 8.025: 97.6631% ( 12) 00:07:47.142 8.025 - 8.074: 97.7183% ( 10) 00:07:47.142 8.074 - 8.123: 97.7734% ( 10) 00:07:47.142 8.123 - 8.172: 97.7844% ( 2) 00:07:47.142 8.172 - 8.222: 97.8230% ( 7) 00:07:47.142 8.222 - 8.271: 97.8616% ( 7) 00:07:47.142 8.271 - 
8.320: 97.9001% ( 7) 00:07:47.142 8.320 - 8.369: 97.9773% ( 14) 00:07:47.142 8.369 - 8.418: 98.0324% ( 10) 00:07:47.142 8.418 - 8.468: 98.0820% ( 9) 00:07:47.142 8.468 - 8.517: 98.1096% ( 5) 00:07:47.142 8.517 - 8.566: 98.1371% ( 5) 00:07:47.142 8.566 - 8.615: 98.1537% ( 3) 00:07:47.142 8.615 - 8.665: 98.1592% ( 1) 00:07:47.142 8.665 - 8.714: 98.1647% ( 1) 00:07:47.142 8.714 - 8.763: 98.1702% ( 1) 00:07:47.142 8.763 - 8.812: 98.1757% ( 1) 00:07:47.142 8.812 - 8.862: 98.1812% ( 1) 00:07:47.142 8.862 - 8.911: 98.1867% ( 1) 00:07:47.142 8.911 - 8.960: 98.1922% ( 1) 00:07:47.142 9.058 - 9.108: 98.1978% ( 1) 00:07:47.142 9.354 - 9.403: 98.2033% ( 1) 00:07:47.142 9.452 - 9.502: 98.2088% ( 1) 00:07:47.142 9.502 - 9.551: 98.2143% ( 1) 00:07:47.142 9.649 - 9.698: 98.2198% ( 1) 00:07:47.142 9.698 - 9.748: 98.2253% ( 1) 00:07:47.142 9.748 - 9.797: 98.2308% ( 1) 00:07:47.142 9.797 - 9.846: 98.2363% ( 1) 00:07:47.142 9.895 - 9.945: 98.2418% ( 1) 00:07:47.142 9.945 - 9.994: 98.2474% ( 1) 00:07:47.142 9.994 - 10.043: 98.2584% ( 2) 00:07:47.142 10.043 - 10.092: 98.2749% ( 3) 00:07:47.142 10.142 - 10.191: 98.2859% ( 2) 00:07:47.142 10.191 - 10.240: 98.3025% ( 3) 00:07:47.142 10.240 - 10.289: 98.3135% ( 2) 00:07:47.142 10.289 - 10.338: 98.3190% ( 1) 00:07:47.142 10.338 - 10.388: 98.3245% ( 1) 00:07:47.142 10.388 - 10.437: 98.3300% ( 1) 00:07:47.142 10.437 - 10.486: 98.3466% ( 3) 00:07:47.142 10.486 - 10.535: 98.3521% ( 1) 00:07:47.142 10.535 - 10.585: 98.3631% ( 2) 00:07:47.142 10.683 - 10.732: 98.3741% ( 2) 00:07:47.142 10.782 - 10.831: 98.3796% ( 1) 00:07:47.142 10.929 - 10.978: 98.3851% ( 1) 00:07:47.142 10.978 - 11.028: 98.3907% ( 1) 00:07:47.142 11.225 - 11.274: 98.3962% ( 1) 00:07:47.142 11.323 - 11.372: 98.4017% ( 1) 00:07:47.142 11.815 - 11.865: 98.4072% ( 1) 00:07:47.142 12.455 - 12.505: 98.4127% ( 1) 00:07:47.142 12.603 - 12.702: 98.4182% ( 1) 00:07:47.142 12.702 - 12.800: 98.4237% ( 1) 00:07:47.142 12.800 - 12.898: 98.4347% ( 2) 00:07:47.142 12.898 - 12.997: 98.4623% ( 5) 00:07:47.142 12.997 - 13.095: 98.5064% ( 8) 00:07:47.142 13.095 - 13.194: 98.6442% ( 25) 00:07:47.142 13.194 - 13.292: 98.7324% ( 16) 00:07:47.142 13.292 - 13.391: 98.8316% ( 18) 00:07:47.142 13.391 - 13.489: 98.8867% ( 10) 00:07:47.142 13.489 - 13.588: 98.9638% ( 14) 00:07:47.142 13.588 - 13.686: 99.0631% ( 18) 00:07:47.142 13.686 - 13.785: 99.1237% ( 11) 00:07:47.142 13.785 - 13.883: 99.2339% ( 20) 00:07:47.142 13.883 - 13.982: 99.3056% ( 13) 00:07:47.142 13.982 - 14.080: 99.3607% ( 10) 00:07:47.142 14.080 - 14.178: 99.4544% ( 17) 00:07:47.142 14.178 - 14.277: 99.4985% ( 8) 00:07:47.142 14.277 - 14.375: 99.5370% ( 7) 00:07:47.142 14.375 - 14.474: 99.5866% ( 9) 00:07:47.142 14.474 - 14.572: 99.6032% ( 3) 00:07:47.142 14.572 - 14.671: 99.6307% ( 5) 00:07:47.142 14.671 - 14.769: 99.6803% ( 9) 00:07:47.142 14.769 - 14.868: 99.7024% ( 4) 00:07:47.142 14.868 - 14.966: 99.7354% ( 6) 00:07:47.142 14.966 - 15.065: 99.7465% ( 2) 00:07:47.142 15.065 - 15.163: 99.7630% ( 3) 00:07:47.142 15.163 - 15.262: 99.7740% ( 2) 00:07:47.142 15.262 - 15.360: 99.7906% ( 3) 00:07:47.142 15.360 - 15.458: 99.7961% ( 1) 00:07:47.142 15.458 - 15.557: 99.8016% ( 1) 00:07:47.142 15.655 - 15.754: 99.8071% ( 1) 00:07:47.142 15.754 - 15.852: 99.8181% ( 2) 00:07:47.142 16.049 - 16.148: 99.8236% ( 1) 00:07:47.142 16.148 - 16.246: 99.8291% ( 1) 00:07:47.142 16.542 - 16.640: 99.8347% ( 1) 00:07:47.142 16.738 - 16.837: 99.8402% ( 1) 00:07:47.142 16.837 - 16.935: 99.8512% ( 2) 00:07:47.142 17.132 - 17.231: 99.8567% ( 1) 00:07:47.142 17.231 - 17.329: 99.8622% ( 1) 
00:07:47.142 17.526 - 17.625: 99.8677% ( 1) 00:07:47.142 17.625 - 17.723: 99.8732% ( 1) 00:07:47.142 17.723 - 17.822: 99.8787% ( 1) 00:07:47.142 17.822 - 17.920: 99.8843% ( 1) 00:07:47.142 18.117 - 18.215: 99.8898% ( 1) 00:07:47.142 18.314 - 18.412: 99.8953% ( 1) 00:07:47.142 18.412 - 18.511: 99.9063% ( 2) 00:07:47.142 18.511 - 18.609: 99.9118% ( 1) 00:07:47.142 18.905 - 19.003: 99.9173% ( 1) 00:07:47.142 19.200 - 19.298: 99.9284% ( 2) 00:07:47.142 19.397 - 19.495: 99.9339% ( 1) 00:07:47.142 19.495 - 19.594: 99.9394% ( 1) 00:07:47.142 19.594 - 19.692: 99.9449% ( 1) 00:07:47.142 19.692 - 19.791: 99.9504% ( 1) 00:07:47.142 19.889 - 19.988: 99.9559% ( 1) 00:07:47.142 21.366 - 21.465: 99.9614% ( 1) 00:07:47.142 23.138 - 23.237: 99.9669% ( 1) 00:07:47.142 27.372 - 27.569: 99.9724% ( 1) 00:07:47.142 30.326 - 30.523: 99.9780% ( 1) 00:07:47.142 45.095 - 45.292: 99.9835% ( 1) 00:07:47.142 45.686 - 45.883: 99.9890% ( 1) 00:07:47.142 53.169 - 53.563: 99.9945% ( 1) 00:07:47.142 59.471 - 59.865: 100.0000% ( 1) 00:07:47.142 00:07:47.142 ************************************ 00:07:47.142 END TEST nvme_overhead 00:07:47.142 ************************************ 00:07:47.142 00:07:47.142 real 0m1.217s 00:07:47.142 user 0m1.064s 00:07:47.142 sys 0m0.102s 00:07:47.142 23:51:53 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.142 23:51:53 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:47.142 23:51:53 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:47.142 23:51:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:47.142 23:51:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.142 23:51:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.142 ************************************ 00:07:47.142 START TEST nvme_arbitration 00:07:47.142 ************************************ 00:07:47.142 23:51:53 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:50.422 Initializing NVMe Controllers 00:07:50.422 Attached to 0000:00:10.0 00:07:50.422 Attached to 0000:00:11.0 00:07:50.422 Attached to 0000:00:13.0 00:07:50.422 Attached to 0000:00:12.0 00:07:50.422 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:50.422 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:50.422 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:50.422 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:50.422 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:50.422 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:50.422 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:50.422 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:50.422 Initialization complete. Launching workers. 
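The "urgent priority queue" lines that follow reflect how the arbitration example sets up its workers: each thread allocates its I/O qpair with a priority class, which takes effect only when the controller is configured for weighted-round-robin arbitration (in the configuration line above, -a selects the arbitration mechanism, -b the arbitration burst, and -n the per-thread I/O count, the 100000 ios in the rows below). A sketch of the per-thread qpair setup, assuming ctrlr from an attach callback:

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio) {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = prio;   /* e.g. SPDK_NVME_QPRIO_URGENT, as logged below */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }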
00:07:50.422 Starting thread on core 1 with urgent priority queue
00:07:50.422 Starting thread on core 2 with urgent priority queue
00:07:50.422 Starting thread on core 3 with urgent priority queue
00:07:50.422 Starting thread on core 0 with urgent priority queue
00:07:50.422 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.422 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.423 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.423 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.423 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.423 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.423 ========================================================
00:07:50.423
00:07:50.423 ************************************
00:07:50.423 END TEST nvme_arbitration
00:07:50.423 ************************************
00:07:50.423
00:07:50.423 real 0m3.342s
00:07:50.423 user 0m9.320s
00:07:50.423 sys 0m0.123s
23:51:57 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.423 23:51:57 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:50.423 23:51:57 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:50.423 23:51:57 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:50.423 23:51:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.423 23:51:57 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.423 ************************************
00:07:50.423 START TEST nvme_single_aen
00:07:50.423 ************************************
00:07:50.423 23:51:57 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:50.681 Asynchronous Event Request test
00:07:50.681 Attached to 0000:00:10.0
00:07:50.681 Attached to 0000:00:11.0
00:07:50.681 Attached to 0000:00:13.0
00:07:50.681 Attached to 0000:00:12.0
00:07:50.681 Reset controller to setup AER completions for this process
00:07:50.681 Registering asynchronous event callbacks...
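"Registering asynchronous event callbacks" corresponds to a single driver call per controller: the driver keeps Asynchronous Event Request commands outstanding and invokes the registered callback with the completion when the controller posts an event. A sketch, assuming ctrlr from an attach callback; the field masks follow the NVMe completion dword 0 layout:

    #include "spdk/nvme.h"

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        if (spdk_nvme_cpl_is_error(cpl)) {
            return;                                   /* AER aborted, e.g. on reset */
        }
        uint32_t ev_type  = cpl->cdw0 & 0x7;          /* 0x01 = SMART / health */
        uint32_t ev_info  = (cpl->cdw0 >> 8) & 0xff;  /* 0x01 = temperature threshold */
        uint32_t log_page = (cpl->cdw0 >> 16) & 0xff; /* 0x02 = health info page */
        /* these three fields are the "aer_cb for log page 2, aen_event_type:
         * 0x01, aen_event_info: 0x01" lines in the output below */
        (void)ev_type; (void)ev_info; (void)log_page;
    }

    static void register_aer(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }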
00:07:50.681 Getting orig temperature thresholds of all controllers 00:07:50.681 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.681 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.681 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.681 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.681 Setting all controllers temperature threshold low to trigger AER 00:07:50.681 Waiting for all controllers temperature threshold to be set lower 00:07:50.681 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.681 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:50.681 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.681 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:50.681 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.681 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:50.681 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.681 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:50.681 Waiting for all controllers to trigger AER and reset threshold 00:07:50.681 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.681 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.681 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.681 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.681 Cleaning up... 00:07:50.681 ************************************ 00:07:50.681 END TEST nvme_single_aen 00:07:50.681 ************************************ 00:07:50.681 00:07:50.681 real 0m0.202s 00:07:50.681 user 0m0.073s 00:07:50.681 sys 0m0.098s 00:07:50.681 23:51:57 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.681 23:51:57 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 23:51:57 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:50.681 23:51:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.681 23:51:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.681 23:51:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 ************************************ 00:07:50.681 START TEST nvme_doorbell_aers 00:07:50.681 ************************************ 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:50.681 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:50.942 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:50.942 23:51:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:50.942 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:50.942 23:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:50.942 [2024-11-18 23:51:57.590320] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:00.952 Executing: test_write_invalid_db 00:08:00.952 Waiting for AER completion... 00:08:00.952 Failure: test_write_invalid_db 00:08:00.952 00:08:00.952 Executing: test_invalid_db_write_overflow_sq 00:08:00.952 Waiting for AER completion... 00:08:00.952 Failure: test_invalid_db_write_overflow_sq 00:08:00.952 00:08:00.952 Executing: test_invalid_db_write_overflow_cq 00:08:00.952 Waiting for AER completion... 00:08:00.952 Failure: test_invalid_db_write_overflow_cq 00:08:00.952 00:08:00.952 23:52:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.952 23:52:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:00.952 [2024-11-18 23:52:07.635049] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:10.924 Executing: test_write_invalid_db 00:08:10.924 Waiting for AER completion... 00:08:10.924 Failure: test_write_invalid_db 00:08:10.924 00:08:10.924 Executing: test_invalid_db_write_overflow_sq 00:08:10.924 Waiting for AER completion... 00:08:10.924 Failure: test_invalid_db_write_overflow_sq 00:08:10.924 00:08:10.924 Executing: test_invalid_db_write_overflow_cq 00:08:10.924 Waiting for AER completion... 00:08:10.924 Failure: test_invalid_db_write_overflow_cq 00:08:10.924 00:08:10.924 23:52:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.924 23:52:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:11.186 [2024-11-18 23:52:17.645942] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:21.188 Executing: test_write_invalid_db 00:08:21.188 Waiting for AER completion... 00:08:21.188 Failure: test_write_invalid_db 00:08:21.188 00:08:21.188 Executing: test_invalid_db_write_overflow_sq 00:08:21.188 Waiting for AER completion... 00:08:21.188 Failure: test_invalid_db_write_overflow_sq 00:08:21.188 00:08:21.188 Executing: test_invalid_db_write_overflow_cq 00:08:21.188 Waiting for AER completion... 
00:08:21.188 Failure: test_invalid_db_write_overflow_cq 00:08:21.188 00:08:21.188 23:52:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.188 23:52:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:21.188 [2024-11-18 23:52:27.683096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 Executing: test_write_invalid_db 00:08:31.174 Waiting for AER completion... 00:08:31.174 Failure: test_write_invalid_db 00:08:31.174 00:08:31.174 Executing: test_invalid_db_write_overflow_sq 00:08:31.174 Waiting for AER completion... 00:08:31.174 Failure: test_invalid_db_write_overflow_sq 00:08:31.174 00:08:31.174 Executing: test_invalid_db_write_overflow_cq 00:08:31.174 Waiting for AER completion... 00:08:31.174 Failure: test_invalid_db_write_overflow_cq 00:08:31.174 00:08:31.174 00:08:31.174 real 0m40.187s 00:08:31.174 user 0m34.157s 00:08:31.174 sys 0m5.620s 00:08:31.174 23:52:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.174 ************************************ 00:08:31.174 END TEST nvme_doorbell_aers 00:08:31.174 ************************************ 00:08:31.174 23:52:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 23:52:37 nvme -- nvme/nvme.sh@97 -- # uname 00:08:31.174 23:52:37 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:31.174 23:52:37 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:31.174 23:52:37 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:31.174 23:52:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.174 23:52:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 ************************************ 00:08:31.174 START TEST nvme_multi_aen 00:08:31.174 ************************************ 00:08:31.174 23:52:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:31.174 [2024-11-18 23:52:37.710236] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.710422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.710437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.711522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.711551] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.711560] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.712368] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. 
Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.712389] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.712397] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.713150] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.713170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 [2024-11-18 23:52:37.713178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63197) is not found. Dropping the request. 00:08:31.174 Child process pid: 63718 00:08:31.433 [Child] Asynchronous Event Request test 00:08:31.433 [Child] Attached to 0000:00:10.0 00:08:31.433 [Child] Attached to 0000:00:11.0 00:08:31.433 [Child] Attached to 0000:00:13.0 00:08:31.433 [Child] Attached to 0000:00:12.0 00:08:31.433 [Child] Registering asynchronous event callbacks... 00:08:31.433 [Child] Getting orig temperature thresholds of all controllers 00:08:31.433 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:31.433 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 [Child] Cleaning up... 00:08:31.433 Asynchronous Event Request test 00:08:31.433 Attached to 0000:00:10.0 00:08:31.433 Attached to 0000:00:11.0 00:08:31.433 Attached to 0000:00:13.0 00:08:31.433 Attached to 0000:00:12.0 00:08:31.433 Reset controller to setup AER completions for this process 00:08:31.433 Registering asynchronous event callbacks... 
00:08:31.433 Getting orig temperature thresholds of all controllers 00:08:31.433 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.433 Setting all controllers temperature threshold low to trigger AER 00:08:31.433 Waiting for all controllers temperature threshold to be set lower 00:08:31.433 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:31.433 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:31.433 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:31.433 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.433 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:31.433 Waiting for all controllers to trigger AER and reset threshold 00:08:31.433 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.433 Cleaning up... 00:08:31.433 00:08:31.433 real 0m0.411s 00:08:31.433 user 0m0.137s 00:08:31.433 sys 0m0.173s 00:08:31.433 23:52:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.433 23:52:37 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:31.433 ************************************ 00:08:31.433 END TEST nvme_multi_aen 00:08:31.433 ************************************ 00:08:31.433 23:52:37 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:31.433 23:52:37 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:31.433 23:52:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.433 23:52:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.433 ************************************ 00:08:31.433 START TEST nvme_startup 00:08:31.433 ************************************ 00:08:31.433 23:52:37 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:31.692 Initializing NVMe Controllers 00:08:31.692 Attached to 0000:00:10.0 00:08:31.692 Attached to 0000:00:11.0 00:08:31.692 Attached to 0000:00:13.0 00:08:31.692 Attached to 0000:00:12.0 00:08:31.692 Initialization complete. 00:08:31.692 Time used:131441.969 (us). 
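One note on the figure above: "Time used" is wall-clock initialization time as measured by the startup test itself, so attaching, enabling, and identifying all four emulated controllers took roughly 131 ms here; the real/user/sys times reported next add the test harness overhead on top of that.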
00:08:31.692
00:08:31.692 real 0m0.188s
00:08:31.692 user 0m0.061s
00:08:31.692 sys 0m0.086s
23:52:38 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:31.692 23:52:38 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:08:31.692 ************************************
00:08:31.692 END TEST nvme_startup
00:08:31.692 ************************************
00:08:31.692 23:52:38 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:08:31.692 23:52:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:31.692 23:52:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:31.692 23:52:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:31.692 ************************************
00:08:31.692 START TEST nvme_multi_secondary
00:08:31.692 ************************************
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63774
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63775
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:08:31.692 23:52:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
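The three spdk_nvme_perf invocations above exercise SPDK's multi-process mode: -i 0 puts all three processes in the same shared-memory group so the two secondaries can attach to controllers owned by the primary. The remaining flags are queue depth 16 (-q), sequential reads (-w read) of 4096 bytes (-o), run time in seconds (-t 5 for one process, -t 3 for the other two), and disjoint core masks (-c 0x1, 0x2, 0x4) so each process polls from its own core.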
00:08:35.006 ========================================================
00:08:35.006 Latency(us)
00:08:35.006 Device Information : IOPS MiB/s Average min max
00:08:35.006 PCIE (0000:00:10.0) NSID 1 from core 1: 7987.74 31.20 2001.73 717.90 5878.18
00:08:35.006 PCIE (0000:00:11.0) NSID 1 from core 1: 7987.74 31.20 2002.66 737.27 5802.48
00:08:35.006 PCIE (0000:00:13.0) NSID 1 from core 1: 7987.74 31.20 2002.62 742.09 6028.45
00:08:35.006 PCIE (0000:00:12.0) NSID 1 from core 1: 7987.74 31.20 2002.61 721.24 5969.30
00:08:35.006 PCIE (0000:00:12.0) NSID 2 from core 1: 7987.74 31.20 2002.68 743.17 5892.91
00:08:35.006 PCIE (0000:00:12.0) NSID 3 from core 1: 7987.74 31.20 2002.65 738.51 5557.44
00:08:35.006 ========================================================
00:08:35.006 Total : 47926.45 187.21 2002.49 717.90 6028.45
00:08:35.006
00:08:35.006 Initializing NVMe Controllers
00:08:35.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:35.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:35.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:35.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:35.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:35.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:35.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:35.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:35.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:35.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:35.006 Initialization complete. Launching workers.
00:08:35.006 ========================================================
00:08:35.006 Latency(us)
00:08:35.006 Device Information : IOPS MiB/s Average min max
00:08:35.006 PCIE (0000:00:10.0) NSID 1 from core 2: 3348.32 13.08 4777.07 780.39 12822.35
00:08:35.006 PCIE (0000:00:11.0) NSID 1 from core 2: 3348.32 13.08 4778.15 807.68 12947.23
00:08:35.006 PCIE (0000:00:13.0) NSID 1 from core 2: 3348.32 13.08 4784.91 790.94 14110.29
00:08:35.006 PCIE (0000:00:12.0) NSID 1 from core 2: 3348.32 13.08 4784.79 807.93 13608.45
00:08:35.006 PCIE (0000:00:12.0) NSID 2 from core 2: 3348.32 13.08 4783.87 807.61 12731.88
00:08:35.006 PCIE (0000:00:12.0) NSID 3 from core 2: 3348.32 13.08 4784.57 822.75 12167.13
00:08:35.006 ========================================================
00:08:35.006 Total : 20089.93 78.48 4782.23 780.39 14110.29
00:08:35.006
00:08:35.006 23:52:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63774
00:08:36.942 Initializing NVMe Controllers
00:08:36.942 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:36.942 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:36.942 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:36.942 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:36.942 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:36.942 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:36.942 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:36.942 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:36.942 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:36.942 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:36.942 Initialization complete. Launching workers.
00:08:36.942 ========================================================
00:08:36.942 Latency(us)
00:08:36.942 Device Information : IOPS MiB/s Average min max
00:08:36.942 PCIE (0000:00:10.0) NSID 1 from core 0: 11451.71 44.73 1396.01 682.84 5616.09
00:08:36.942 PCIE (0000:00:11.0) NSID 1 from core 0: 11451.71 44.73 1396.80 690.75 5757.90
00:08:36.942 PCIE (0000:00:13.0) NSID 1 from core 0: 11451.71 44.73 1396.78 652.80 6196.20
00:08:36.942 PCIE (0000:00:12.0) NSID 1 from core 0: 11451.71 44.73 1396.77 641.92 6130.24
00:08:36.942 PCIE (0000:00:12.0) NSID 2 from core 0: 11451.71 44.73 1396.76 613.52 6894.09
00:08:36.942 PCIE (0000:00:12.0) NSID 3 from core 0: 11451.71 44.73 1396.75 590.93 6091.12
00:08:36.942 ========================================================
00:08:36.942 Total : 68710.25 268.40 1396.65 590.93 6894.09
00:08:36.942
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63775
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63844
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63845
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:36.942 23:52:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
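An aside on the mechanics the two rounds above exercise: spdk_nvme_perf supports DPDK multi-process mode, so every invocation passes the same shared-memory id (-i 0) to join one hugepage group, and disjoint core masks (0x1, 0x2, 0x4) keep the reactors on separate cores. Reassembled from the xtrace, the pattern is roughly this; the backgrounding and the sleep are illustrative glue, not lifted from nvme.sh:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # The first process to initialize becomes the DPDK primary; here it also
    # runs longest (-t 5) so it outlives both secondaries.
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    sleep 1    # assumed guard so the primary owns the controllers before secondaries attach
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary on core 2
    wait

The result tables bear this split out: the two 3-second secondaries report first (from core 1 and core 2), then the 5-second run on core 0 once the harness's wait calls return.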
00:08:40.224 ======================================================== 00:08:40.224 Latency(us) 00:08:40.224 Device Information : IOPS MiB/s Average min max 00:08:40.224 PCIE (0000:00:10.0) NSID 1 from core 0: 7843.48 30.64 2038.55 708.99 6552.35 00:08:40.224 PCIE (0000:00:11.0) NSID 1 from core 0: 7843.48 30.64 2039.51 720.85 6516.65 00:08:40.224 PCIE (0000:00:13.0) NSID 1 from core 0: 7843.48 30.64 2039.53 724.77 6129.11 00:08:40.224 PCIE (0000:00:12.0) NSID 1 from core 0: 7843.48 30.64 2039.56 729.68 6173.72 00:08:40.224 PCIE (0000:00:12.0) NSID 2 from core 0: 7843.48 30.64 2039.60 733.17 6146.03 00:08:40.224 PCIE (0000:00:12.0) NSID 3 from core 0: 7843.48 30.64 2039.65 735.38 6595.48 00:08:40.224 ======================================================== 00:08:40.224 Total : 47060.86 183.83 2039.40 708.99 6595.48 00:08:40.224 00:08:40.483 Initializing NVMe Controllers 00:08:40.483 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.483 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.483 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.483 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.483 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:40.483 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:40.483 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:40.483 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:40.483 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:40.483 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:40.483 Initialization complete. Launching workers. 00:08:40.483 ======================================================== 00:08:40.483 Latency(us) 00:08:40.483 Device Information : IOPS MiB/s Average min max 00:08:40.483 PCIE (0000:00:10.0) NSID 1 from core 1: 7866.81 30.73 2032.52 690.68 5916.43 00:08:40.483 PCIE (0000:00:11.0) NSID 1 from core 1: 7866.81 30.73 2033.42 708.33 6025.45 00:08:40.483 PCIE (0000:00:13.0) NSID 1 from core 1: 7866.81 30.73 2033.37 705.52 5513.88 00:08:40.483 PCIE (0000:00:12.0) NSID 1 from core 1: 7866.81 30.73 2033.31 713.08 5419.46 00:08:40.483 PCIE (0000:00:12.0) NSID 2 from core 1: 7866.81 30.73 2033.25 713.90 5313.02 00:08:40.483 PCIE (0000:00:12.0) NSID 3 from core 1: 7866.81 30.73 2033.20 711.95 5617.04 00:08:40.483 ======================================================== 00:08:40.483 Total : 47200.88 184.38 2033.18 690.68 6025.45 00:08:40.483 00:08:42.382 Initializing NVMe Controllers 00:08:42.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.382 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.382 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.382 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:42.382 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:42.382 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:42.382 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:42.382 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:42.382 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:42.382 Initialization complete. Launching workers. 
00:08:42.382 ======================================================== 00:08:42.382 Latency(us) 00:08:42.382 Device Information : IOPS MiB/s Average min max 00:08:42.382 PCIE (0000:00:10.0) NSID 1 from core 2: 4580.38 17.89 3491.32 732.81 13946.93 00:08:42.382 PCIE (0000:00:11.0) NSID 1 from core 2: 4580.38 17.89 3492.39 708.51 13796.68 00:08:42.382 PCIE (0000:00:13.0) NSID 1 from core 2: 4580.38 17.89 3492.33 756.49 12893.54 00:08:42.382 PCIE (0000:00:12.0) NSID 1 from core 2: 4580.38 17.89 3492.10 757.32 12853.83 00:08:42.382 PCIE (0000:00:12.0) NSID 2 from core 2: 4580.38 17.89 3492.58 747.18 14288.34 00:08:42.382 PCIE (0000:00:12.0) NSID 3 from core 2: 4580.38 17.89 3491.65 590.29 13370.13 00:08:42.382 ======================================================== 00:08:42.382 Total : 27482.26 107.35 3492.06 590.29 14288.34 00:08:42.382 00:08:42.382 ************************************ 00:08:42.382 END TEST nvme_multi_secondary 00:08:42.382 ************************************ 00:08:42.382 23:52:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63844 00:08:42.382 23:52:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63845 00:08:42.382 00:08:42.382 real 0m10.699s 00:08:42.382 user 0m18.413s 00:08:42.382 sys 0m0.635s 00:08:42.382 23:52:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.382 23:52:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:42.382 23:52:48 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:42.382 23:52:48 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:42.382 23:52:48 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62801 ]] 00:08:42.382 23:52:48 nvme -- common/autotest_common.sh@1094 -- # kill 62801 00:08:42.382 23:52:48 nvme -- common/autotest_common.sh@1095 -- # wait 62801 00:08:42.382 [2024-11-18 23:52:48.957289] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.957358] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.957384] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.957400] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.960238] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.960307] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.960327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.960345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.962763] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 
00:08:42.383 [2024-11-18 23:52:48.962819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.962831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.962842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.964992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.965044] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.965056] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:48.965066] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request. 00:08:42.383 [2024-11-18 23:52:49.069647] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:42.642 23:52:49 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:42.642 23:52:49 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:42.642 23:52:49 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:42.642 23:52:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.642 23:52:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.642 23:52:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.642 ************************************ 00:08:42.642 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:42.642 ************************************ 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:42.642 * Looking for test storage... 
00:08:42.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.642 --rc genhtml_branch_coverage=1 00:08:42.642 --rc genhtml_function_coverage=1 00:08:42.642 --rc genhtml_legend=1 00:08:42.642 --rc geninfo_all_blocks=1 00:08:42.642 --rc geninfo_unexecuted_blocks=1 00:08:42.642 00:08:42.642 ' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.642 --rc genhtml_branch_coverage=1 00:08:42.642 --rc genhtml_function_coverage=1 00:08:42.642 --rc genhtml_legend=1 00:08:42.642 --rc geninfo_all_blocks=1 00:08:42.642 --rc geninfo_unexecuted_blocks=1 00:08:42.642 00:08:42.642 ' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.642 --rc genhtml_branch_coverage=1 00:08:42.642 --rc genhtml_function_coverage=1 00:08:42.642 --rc genhtml_legend=1 00:08:42.642 --rc geninfo_all_blocks=1 00:08:42.642 --rc geninfo_unexecuted_blocks=1 00:08:42.642 00:08:42.642 ' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.642 --rc genhtml_branch_coverage=1 00:08:42.642 --rc genhtml_function_coverage=1 00:08:42.642 --rc genhtml_legend=1 00:08:42.642 --rc geninfo_all_blocks=1 00:08:42.642 --rc geninfo_unexecuted_blocks=1 00:08:42.642 00:08:42.642 ' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:42.642 
23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:42.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64004 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64004 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64004 ']' 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
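Before spdk_tgt comes up, note what get_first_nvme_bdf just did in the trace above: it is, in effect, a jq one-liner over gen_nvme.sh's JSON output. A minimal standalone sketch using the same repo paths:

    # Enumerate NVMe BDFs the way the trace does, then take the first one.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
    echo "${bdfs[0]}"    # prints 0000:00:10.0 on this VM, out of the 4 devices counted above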
00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.642 23:52:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.901 [2024-11-18 23:52:49.366220] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:42.901 [2024-11-18 23:52:49.366307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64004 ] 00:08:42.901 [2024-11-18 23:52:49.527627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.158 [2024-11-18 23:52:49.629828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.158 [2024-11-18 23:52:49.630156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.158 [2024-11-18 23:52:49.630318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.159 [2024-11-18 23:52:49.630485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.725 nvme0n1 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pdz8m.txt 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.725 true 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731973970 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64027 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:43.725 23:52:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.632 [2024-11-18 23:52:52.306788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:45.632 [2024-11-18 23:52:52.307162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:45.632 [2024-11-18 23:52:52.307192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:45.632 [2024-11-18 23:52:52.307206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.632 [2024-11-18 23:52:52.308784] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:45.632 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64027 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64027 00:08:45.632 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64027 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pdz8m.txt 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pdz8m.txt 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64004 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64004 ']' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64004 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64004 00:08:45.893 killing process with pid 64004 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64004' 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64004 00:08:45.893 23:52:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64004 00:08:47.279 23:52:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:47.279 23:52:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:47.279 ************************************ 00:08:47.279 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:47.279 ************************************ 00:08:47.279 00:08:47.279 real 0m4.595s 
00:08:47.279 user 0m16.352s 00:08:47.279 sys 0m0.483s 00:08:47.279 23:52:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.279 23:52:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.279 23:52:53 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:47.279 23:52:53 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:47.279 23:52:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.279 23:52:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.279 23:52:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.279 ************************************ 00:08:47.279 START TEST nvme_fio 00:08:47.279 ************************************ 00:08:47.279 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:47.279 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:47.279 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:47.279 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:47.279 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:47.279 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:47.279 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:47.280 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.280 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:47.280 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:47.280 23:52:53 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:47.280 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:47.280 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:47.280 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:47.280 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:47.280 23:52:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:47.541 23:52:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:47.541 23:52:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:47.802 23:52:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:47.802 23:52:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:47.802 23:52:54 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:47.802 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:47.803 23:52:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.803 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:47.803 fio-3.35 00:08:47.803 Starting 1 thread 00:08:54.387 00:08:54.387 test: (groupid=0, jobs=1): err= 0: pid=64169: Mon Nov 18 23:53:00 2024 00:08:54.387 read: IOPS=24.4k, BW=95.1MiB/s (99.8MB/s)(190MiB/2001msec) 00:08:54.387 slat (usec): min=3, max=133, avg= 4.89, stdev= 2.11 00:08:54.387 clat (usec): min=558, max=12592, avg=2620.30, stdev=878.58 00:08:54.387 lat (usec): min=570, max=12635, avg=2625.19, stdev=879.75 00:08:54.387 clat percentiles (usec): 00:08:54.387 | 1.00th=[ 1418], 5.00th=[ 1958], 10.00th=[ 2114], 20.00th=[ 2278], 00:08:54.387 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:08:54.387 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 3458], 95.00th=[ 4883], 00:08:54.387 | 99.00th=[ 6128], 99.50th=[ 6849], 99.90th=[ 8586], 99.95th=[ 9372], 00:08:54.387 | 99.99th=[12256] 00:08:54.387 bw ( KiB/s): min=84768, max=106040, per=99.26%, avg=96701.33, stdev=10870.77, samples=3 00:08:54.387 iops : min=21192, max=26510, avg=24175.33, stdev=2717.69, samples=3 00:08:54.387 write: IOPS=24.2k, BW=94.5MiB/s (99.1MB/s)(189MiB/2001msec); 0 zone resets 00:08:54.387 slat (usec): min=3, max=111, avg= 5.15, stdev= 2.14 00:08:54.387 clat (usec): min=602, max=12370, avg=2630.38, stdev=890.63 00:08:54.387 lat (usec): min=607, max=12389, avg=2635.53, stdev=891.81 00:08:54.388 clat percentiles (usec): 00:08:54.388 | 1.00th=[ 1385], 5.00th=[ 1958], 10.00th=[ 2114], 20.00th=[ 2278], 00:08:54.388 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:08:54.388 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 3458], 95.00th=[ 4948], 00:08:54.388 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 9372], 00:08:54.388 | 99.99th=[11863] 00:08:54.388 bw ( KiB/s): min=84592, max=105328, per=100.00%, avg=96840.00, stdev=10867.32, samples=3 00:08:54.388 iops : min=21148, max=26332, avg=24210.00, stdev=2716.83, samples=3 00:08:54.388 lat (usec) : 750=0.02%, 1000=0.15% 00:08:54.388 lat (msec) : 2=5.66%, 4=86.26%, 10=7.87%, 20=0.04% 00:08:54.388 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=607 00:08:54.388 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:54.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.388 issued rwts: total=48737,48428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.388 00:08:54.388 Run status group 0 (all jobs): 00:08:54.388 READ: bw=95.1MiB/s (99.8MB/s), 95.1MiB/s-95.1MiB/s (99.8MB/s-99.8MB/s), io=190MiB (200MB), run=2001-2001msec 00:08:54.388 WRITE: bw=94.5MiB/s (99.1MB/s), 94.5MiB/s-94.5MiB/s (99.1MB/s-99.1MB/s), io=189MiB (198MB), run=2001-2001msec 00:08:54.388 ----------------------------------------------------- 00:08:54.388 Suppressions used: 00:08:54.388 count bytes template 00:08:54.388 1 32 /usr/src/fio/parse.c 00:08:54.388 1 8 libtcmalloc_minimal.so 00:08:54.388 ----------------------------------------------------- 00:08:54.388 00:08:54.388 23:53:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:54.388 23:53:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.388 23:53:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:54.388 23:53:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.388 23:53:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:54.388 23:53:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.649 23:53:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.649 23:53:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:54.649 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:54.911 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.911 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.911 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:54.911 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.911 23:53:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.911 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.911 fio-3.35 00:08:54.911 Starting 1 thread 00:09:00.218 00:09:00.218 test: (groupid=0, jobs=1): err= 0: pid=64234: Mon Nov 18 23:53:06 2024 00:09:00.218 read: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(137MiB/2001msec) 00:09:00.218 slat (nsec): min=3700, max=98107, avg=6087.09, stdev=3487.50 00:09:00.218 clat (usec): min=226, max=11716, avg=3619.44, stdev=1343.76 00:09:00.218 lat (usec): min=230, max=11735, avg=3625.53, stdev=1345.20 00:09:00.218 clat percentiles (usec): 00:09:00.218 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:00.218 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 3064], 60.00th=[ 3359], 00:09:00.218 | 70.00th=[ 3949], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6390], 00:09:00.218 | 99.00th=[ 7767], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9634], 00:09:00.218 | 99.99th=[10945] 00:09:00.218 bw ( KiB/s): min=64942, max=73264, per=100.00%, avg=69951.33, stdev=4412.81, samples=3 00:09:00.218 iops : min=16235, max=18316, avg=17487.67, stdev=1103.49, samples=3 00:09:00.218 write: IOPS=17.5k, BW=68.3MiB/s (71.7MB/s)(137MiB/2001msec); 0 zone resets 00:09:00.218 slat (nsec): min=3906, max=82654, avg=6219.31, stdev=3453.91 00:09:00.218 clat (usec): min=210, max=11543, avg=3668.28, stdev=1354.29 00:09:00.218 lat (usec): min=214, max=11550, avg=3674.49, stdev=1355.75 00:09:00.218 clat percentiles (usec): 00:09:00.218 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:09:00.218 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3097], 60.00th=[ 3425], 00:09:00.218 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5735], 95.00th=[ 6456], 00:09:00.218 | 99.00th=[ 7832], 99.50th=[ 8586], 99.90th=[ 9503], 99.95th=[ 9896], 00:09:00.218 | 99.99th=[11076] 00:09:00.218 bw ( KiB/s): min=65365, max=72800, per=99.97%, avg=69969.67, stdev=4022.56, samples=3 00:09:00.218 iops : min=16341, max=18200, avg=17492.33, stdev=1005.78, samples=3 00:09:00.218 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:00.218 lat (msec) : 2=0.83%, 4=69.32%, 10=29.76%, 20=0.04% 00:09:00.218 cpu : usr=98.70%, sys=0.05%, ctx=13, majf=0, minf=607 00:09:00.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.218 issued rwts: total=34982,35012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.218 00:09:00.218 Run status group 0 (all jobs): 00:09:00.218 READ: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=137MiB (143MB), run=2001-2001msec 00:09:00.218 WRITE: bw=68.3MiB/s (71.7MB/s), 68.3MiB/s-68.3MiB/s (71.7MB/s-71.7MB/s), io=137MiB (143MB), run=2001-2001msec 00:09:00.218 ----------------------------------------------------- 00:09:00.218 Suppressions used: 00:09:00.218 count bytes template 00:09:00.219 1 32 /usr/src/fio/parse.c 00:09:00.219 1 8 libtcmalloc_minimal.so 00:09:00.219 ----------------------------------------------------- 00:09:00.219 00:09:00.219 23:53:06 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.219 23:53:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.219 23:53:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:00.219 23:53:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.477 23:53:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:00.477 23:53:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.477 23:53:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.477 23:53:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.477 23:53:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:00.735 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.735 fio-3.35 00:09:00.735 Starting 1 thread 00:09:07.293 00:09:07.293 test: (groupid=0, jobs=1): err= 0: pid=64293: Mon Nov 18 23:53:13 2024 00:09:07.293 read: IOPS=19.1k, BW=74.7MiB/s (78.4MB/s)(150MiB/2001msec) 00:09:07.293 slat (usec): min=4, max=451, avg= 5.51, stdev= 3.98 00:09:07.293 clat (usec): min=200, max=11072, avg=3312.38, stdev=1228.26 00:09:07.293 lat (usec): min=204, max=11079, avg=3317.90, stdev=1229.59 00:09:07.293 clat percentiles (usec): 00:09:07.293 | 1.00th=[ 1926], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:07.293 | 30.00th=[ 2507], 40.00th=[ 2638], 
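While this pass runs, the command the fio_nvme wrapper has assembled is worth seeing in one piece. Reconstructed from the xtrace above (paths verbatim from this run), the ASan runtime is deliberately listed ahead of the SPDK ioengine in LD_PRELOAD so the sanitizer loads first:

    # Third pass: drive 0000:00:12.0 through the SPDK fio plugin.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

Note the dots in traddr=0000.00.12.0: fio treats ':' as a filename separator, so the PCI address is written with '.' in place of ':' and the plugin translates it back.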
50.00th=[ 2802], 60.00th=[ 3064], 00:09:07.293 | 70.00th=[ 3490], 80.00th=[ 4228], 90.00th=[ 5211], 95.00th=[ 5932], 00:09:07.293 | 99.00th=[ 7111], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 9503], 00:09:07.293 | 99.99th=[10814] 00:09:07.293 bw ( KiB/s): min=70520, max=87056, per=99.82%, avg=76402.67, stdev=9242.80, samples=3 00:09:07.293 iops : min=17630, max=21764, avg=19100.67, stdev=2310.70, samples=3 00:09:07.293 write: IOPS=19.1k, BW=74.7MiB/s (78.3MB/s)(149MiB/2001msec); 0 zone resets 00:09:07.293 slat (usec): min=4, max=159, avg= 5.59, stdev= 3.02 00:09:07.293 clat (usec): min=295, max=11167, avg=3356.40, stdev=1241.47 00:09:07.293 lat (usec): min=300, max=11187, avg=3361.98, stdev=1242.73 00:09:07.293 clat percentiles (usec): 00:09:07.293 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:07.293 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3097], 00:09:07.293 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 5932], 00:09:07.293 | 99.00th=[ 7242], 99.50th=[ 8029], 99.90th=[ 8717], 99.95th=[ 8979], 00:09:07.293 | 99.99th=[10683] 00:09:07.293 bw ( KiB/s): min=70664, max=86736, per=100.00%, avg=76565.33, stdev=8845.77, samples=3 00:09:07.293 iops : min=17666, max=21684, avg=19141.33, stdev=2211.44, samples=3 00:09:07.293 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:07.293 lat (msec) : 2=1.23%, 4=75.12%, 10=23.58%, 20=0.02% 00:09:07.293 cpu : usr=98.30%, sys=0.40%, ctx=5, majf=0, minf=607 00:09:07.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.293 issued rwts: total=38289,38247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.293 00:09:07.293 Run status group 0 (all jobs): 00:09:07.293 READ: bw=74.7MiB/s (78.4MB/s), 74.7MiB/s-74.7MiB/s (78.4MB/s-78.4MB/s), io=150MiB (157MB), run=2001-2001msec 00:09:07.293 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=149MiB (157MB), run=2001-2001msec 00:09:07.293 ----------------------------------------------------- 00:09:07.293 Suppressions used: 00:09:07.293 count bytes template 00:09:07.293 1 32 /usr/src/fio/parse.c 00:09:07.293 1 8 libtcmalloc_minimal.so 00:09:07.293 ----------------------------------------------------- 00:09:07.293 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:07.293 23:53:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.552 23:53:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.552 23:53:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.552 23:53:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:07.811 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.811 fio-3.35 00:09:07.811 Starting 1 thread 00:09:15.951 00:09:15.951 test: (groupid=0, jobs=1): err= 0: pid=64361: Mon Nov 18 23:53:21 2024 00:09:15.951 read: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2001msec) 00:09:15.951 slat (nsec): min=4198, max=74397, avg=5626.79, stdev=2969.10 00:09:15.951 clat (usec): min=734, max=9102, avg=3543.10, stdev=1285.52 00:09:15.951 lat (usec): min=739, max=9113, avg=3548.73, stdev=1286.81 00:09:15.951 clat percentiles (usec): 00:09:15.951 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:15.951 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 3064], 60.00th=[ 3425], 00:09:15.951 | 70.00th=[ 4047], 80.00th=[ 4752], 90.00th=[ 5473], 95.00th=[ 6063], 00:09:15.951 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 8586], 00:09:15.951 | 99.99th=[ 8979] 00:09:15.951 bw ( KiB/s): min=62944, max=76256, per=99.82%, avg=71560.00, stdev=7471.76, samples=3 00:09:15.951 iops : min=15736, max=19064, avg=17890.00, stdev=1867.94, samples=3 00:09:15.951 write: IOPS=17.9k, BW=70.1MiB/s (73.5MB/s)(140MiB/2001msec); 0 zone resets 00:09:15.951 slat (nsec): min=4266, max=71207, avg=5729.74, stdev=3156.57 00:09:15.951 clat (usec): min=659, max=9195, avg=3570.25, stdev=1283.82 00:09:15.951 lat (usec): min=665, max=9211, avg=3575.98, stdev=1285.18 00:09:15.951 clat percentiles (usec): 00:09:15.951 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:15.951 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 3097], 60.00th=[ 3425], 00:09:15.951 | 70.00th=[ 4047], 80.00th=[ 4752], 90.00th=[ 5473], 95.00th=[ 6063], 
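As this final pass starts, a quick cross-check of how the numbers in these result blocks hang together, using the 0000:00:12.0 run just above (pid 64293): 38289 reads of 4 KiB over the 2001 ms run should reproduce the reported READ bandwidth.

    # 38289 IOs x 4096 B / 2.001 s = 78.4 MB/s = 74.7 MiB/s, matching the READ line above.
    awk 'BEGIN { printf "%.1f MiB/s\n", 38289 * 4096 / 2.001 / (1024 * 1024) }'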
00:09:15.951 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8455], 99.95th=[ 8717], 00:09:15.951 | 99.99th=[ 9110] 00:09:15.951 bw ( KiB/s): min=63312, max=75800, per=99.67%, avg=71498.67, stdev=7092.91, samples=3 00:09:15.951 iops : min=15828, max=18950, avg=17874.67, stdev=1773.23, samples=3 00:09:15.951 lat (usec) : 750=0.01%, 1000=0.01% 00:09:15.951 lat (msec) : 2=1.36%, 4=67.98%, 10=30.65% 00:09:15.951 cpu : usr=98.80%, sys=0.10%, ctx=14, majf=0, minf=605 00:09:15.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:15.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.951 issued rwts: total=35864,35885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.951 00:09:15.951 Run status group 0 (all jobs): 00:09:15.951 READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:15.951 WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:15.951 ----------------------------------------------------- 00:09:15.951 Suppressions used: 00:09:15.951 count bytes template 00:09:15.951 1 32 /usr/src/fio/parse.c 00:09:15.951 1 8 libtcmalloc_minimal.so 00:09:15.951 ----------------------------------------------------- 00:09:15.951 00:09:15.951 ************************************ 00:09:15.951 END TEST nvme_fio 00:09:15.951 ************************************ 00:09:15.951 23:53:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:15.951 23:53:21 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:15.951 00:09:15.951 real 0m27.803s 00:09:15.951 user 0m21.176s 00:09:15.951 sys 0m9.537s 00:09:15.951 23:53:21 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.951 23:53:21 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:15.951 ************************************ 00:09:15.951 END TEST nvme 00:09:15.951 ************************************ 00:09:15.951 00:09:15.951 real 1m36.731s 00:09:15.951 user 3m41.420s 00:09:15.951 sys 0m19.958s 00:09:15.951 23:53:21 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.951 23:53:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.951 23:53:21 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:15.951 23:53:21 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:15.951 23:53:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.951 23:53:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.951 23:53:21 -- common/autotest_common.sh@10 -- # set +x 00:09:15.951 ************************************ 00:09:15.951 START TEST nvme_scc 00:09:15.951 ************************************ 00:09:15.951 23:53:21 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:15.951 * Looking for test storage... 
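The fio_plugin trace above also shows how autotest runs fio against the SPDK external ioengine under ASan: fio itself is not linked with the sanitizer, so the runtime that build/fio/spdk_nvme depends on is located with ldd and pushed into LD_PRELOAD ahead of the plugin before fio starts. A minimal sketch of that pattern, assuming the paths seen in this run (the wrapper name run_fio_with_asan is hypothetical; the real logic lives in common/autotest_common.sh and also probes libclang_rt.asan):

    run_fio_with_asan() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib

        # Resolve the ASan runtime the plugin links against
        # (/usr/lib64/libasan.so.8 in the run above).
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

        # Preload the sanitizer first, then the ioengine, so fio can
        # dlopen the plugin without unresolved ASan symbols.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }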
00:09:15.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:15.951 23:53:21 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.951 23:53:21 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.951 23:53:21 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.951 23:53:21 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:15.951 23:53:21 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:15.952 23:53:21 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.952 23:53:21 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.952 --rc genhtml_branch_coverage=1 00:09:15.952 --rc genhtml_function_coverage=1 00:09:15.952 --rc genhtml_legend=1 00:09:15.952 --rc geninfo_all_blocks=1 00:09:15.952 --rc geninfo_unexecuted_blocks=1 00:09:15.952 00:09:15.952 ' 00:09:15.952 23:53:21 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.952 --rc genhtml_branch_coverage=1 00:09:15.952 --rc genhtml_function_coverage=1 00:09:15.952 --rc genhtml_legend=1 00:09:15.952 --rc geninfo_all_blocks=1 00:09:15.952 --rc geninfo_unexecuted_blocks=1 00:09:15.952 00:09:15.952 ' 00:09:15.952 23:53:21 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.952 --rc genhtml_branch_coverage=1 00:09:15.952 --rc genhtml_function_coverage=1 00:09:15.952 --rc genhtml_legend=1 00:09:15.952 --rc geninfo_all_blocks=1 00:09:15.952 --rc geninfo_unexecuted_blocks=1 00:09:15.952 00:09:15.952 ' 00:09:15.952 23:53:21 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.952 --rc genhtml_branch_coverage=1 00:09:15.952 --rc genhtml_function_coverage=1 00:09:15.952 --rc genhtml_legend=1 00:09:15.952 --rc geninfo_all_blocks=1 00:09:15.952 --rc geninfo_unexecuted_blocks=1 00:09:15.952 00:09:15.952 ' 00:09:15.952 23:53:21 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.952 23:53:21 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.952 23:53:21 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.952 23:53:21 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.952 23:53:21 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.952 23:53:21 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:15.952 23:53:21 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
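The lt 1.15 2 call traced above is the lcov version guard from scripts/common.sh: both version strings are split on dots and dashes into arrays and compared field by field, with missing fields treated as zero. A rough standalone equivalent, assuming purely numeric fields (the name version_lt is hypothetical; the script's own helpers are lt and cmp_versions):

    # Return 0 (true) when $1 is strictly older than $2.
    version_lt() {
        local -a ver1 ver2
        local v n
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # Missing fields compare as 0: "1.15" vs "2" is 1.15.0 vs 2.0.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x'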
00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:15.952 23:53:21 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:15.952 23:53:21 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.952 23:53:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:15.952 23:53:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:15.952 23:53:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:15.952 23:53:21 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.952 Waiting for block devices as requested 00:09:15.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.952 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.952 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.257 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:21.257 23:53:27 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:21.257 23:53:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.257 23:53:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.257 23:53:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.257 23:53:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.257 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.258 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:21.259 23:53:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.259 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.260 23:53:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.260 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.261 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.262 23:53:27 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:21.262 23:53:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.262 23:53:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:21.262 23:53:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.262 23:53:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:21.262 23:53:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.262 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.263 
23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
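The mdts=7 captured for nvme1 a few statements up bounds every transfer to 2^MDTS units of the controller's minimum memory page size. Assuming CAP.MPSMIN = 0 (4 KiB pages), which this id-ctrl dump does not show, that works out to 512 KiB per command; mdts=0 would mean no reported limit:

    # Assumption: CAP.MPSMIN = 0, i.e. a 4 KiB minimum page size.
    declare -A nvme1=( [mdts]=7 )
    (( max_xfer = 4096 << nvme1[mdts] ))              # 4096 << 7 = 524288
    echo "max transfer: $(( max_xfer / 1024 )) KiB"   # 512 KiB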
00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:21.263 23:53:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:21.263 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:21.264 23:53:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.264 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:21.265 23:53:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
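Since this suite is nvme_scc, the field that matters most in the nvme1 dump is oncs=0x15d: per the ONCS layout in the NVMe base specification, bit 8 (mask 0x100) advertises the Copy command, and 0x15d has it set. A hypothetical check against the parsed array, not the suite's actual gating code:

    # Hypothetical gate; SPDK's tests use their own helpers for this.
    declare -A nvme1=( [oncs]=0x15d )
    if (( nvme1[oncs] & 0x100 )); then   # ONCS bit 8 = Copy supported
        echo "nvme1 advertises the Copy (SCC) command"
    fi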
00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.265 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.266 
23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:21.266 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:21.267 23:53:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.267 23:53:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:21.267 23:53:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.267 23:53:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.267 23:53:27 nvme_scc -- 
00:09:21.267 23:53:27 nvme_scc -- nvme/functions.sh@51-52 -- ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
00:09:21.267 23:53:27 nvme_scc -- nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
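mdts=7 above bounds the largest single data transfer: the NVMe spec defines it as 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). CAP is not part of this trace, so assuming the usual QEMU value MPSMIN=0 (4 KiB pages), a quick back-of-the-envelope in the same shell dialect:

# max transfer = 2^mdts * minimum page size; MPSMIN=0 (4 KiB pages) is an
# assumption here -- the CAP register is not shown in this log.
mdts=7
page=$(( 1 << 12 ))               # 4096 B, assumed minimum page size
echo $(( (1 << mdts) * page ))    # 524288 B = 512 KiB per command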
00:09:21.268 23:53:27 nvme_scc -- nvme2 id-ctrl (cont.): fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
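wctemp=343 and cctemp=373 look odd only until you recall the spec reports these composite-temperature thresholds in kelvin; converted:

# warning / critical composite temperature thresholds, kelvin -> Celsius
echo $(( 343 - 273 ))    # wctemp -> 70 C
echo $(( 373 - 273 ))    # cctemp -> 100 C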
00:09:21.268 23:53:27 nvme_scc -- nvme2 id-ctrl (cont.): mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:21.269 23:53:27 nvme_scc -- nvme2 id-ctrl (cont.): sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:21.270 23:53:27 nvme_scc -- nvme2 power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0', rwt='0 rwl:0 idle_power:- active_power:-', active_power_workload=-
00:09:21.270 23:53:27 nvme_scc -- nvme/functions.sh@53-57 -- local -n _ctrl_ns=nvme2_ns; namespace /sys/class/nvme/nvme2/nvme2n1 found, ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
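The 'local -n _ctrl_ns=nvme2_ns' step above is a bash nameref: writes to _ctrl_ns land in the per-controller array nvme2_ns, keyed by the trailing namespace index (${ns##*n}). A self-contained sketch of the idiom, with illustrative names rather than the exact SPDK variables:

#!/usr/bin/env bash
# nameref indexing as used above; register_ns is a stand-in helper.
declare -A nvme2_ns=()
register_ns() {
    local -n _ctrl_ns=${1}_ns    # nameref: writes go to e.g. nvme2_ns
    local ns=$2
    _ctrl_ns[${ns##*n}]=$ns      # key "1" for nvme2n1, "2" for nvme2n2
}
register_ns nvme2 nvme2n1
register_ns nvme2 nvme2n2
declare -p nvme2_ns              # -> declare -A nvme2_ns=([1]="nvme2n1" [2]="nvme2n2")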
00:09:21.270 23:53:27 nvme_scc -- nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.271 23:53:27 nvme_scc -- nvme2n1 LBA formats: lbaf0 'ms:0 lbads:9 rp:0', lbaf1 'ms:8 lbads:9 rp:0', lbaf2 'ms:16 lbads:9 rp:0', lbaf3 'ms:64 lbads:9 rp:0', lbaf4 'ms:0 lbads:12 rp:0 (in use)', lbaf5 'ms:8 lbads:12 rp:0', lbaf6 'ms:16 lbads:12 rp:0', lbaf7 'ms:64 lbads:12 rp:0'
00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[1]=nvme2n1; next namespace /sys/class/nvme/nvme2/nvme2n2 found, ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2
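From the nvme2n1 digest above, flbas=0x4 selects LBA format 4 ('ms:0 lbads:12', the one marked in use), i.e. 4096-byte blocks with no metadata, and nsze=0x100000 gives the namespace size in blocks; the raw capacity therefore works out as:

# nvme2n1: lbads:12 -> 2^12-byte blocks; nsze is in blocks
blocks=$(( 0x100000 ))      # 1048576
bs=$(( 1 << 12 ))           # 4096
echo $(( blocks * bs ))     # 4294967296 B = 4 GiB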
nvme/functions.sh@18 -- # shift 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:21.271 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:21.272 23:53:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:21.272 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
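The mssrl, mcl, and msrc values recorded just above are the namespace's Copy command limits, which is what this nvme_scc suite ultimately exercises: in the NVMe base specification MSRC is a 0's-based field, so msrc=127 permits up to 128 source ranges per Copy, each range at most mssrl=128 LBAs long, and the command as a whole at most mcl=128 LBAs. A quick arithmetic check of those limits (values taken from the trace above; plain bash):

    msrc=127 mssrl=128 mcl=128
    echo "$((msrc + 1)) ranges max, <= $mssrl LBAs each, <= $mcl LBAs total"   # MSRC is 0's based

The all-zero nguid and eui64 just above simply mean this emulated namespace advertises no unique identifier; the lbaf descriptors that follow complete the id-ns picture.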
00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 
23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 
23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.273 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:21.274 23:53:27 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.274 
23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.274 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:21.275 23:53:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.275 23:53:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:21.275 23:53:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.275 23:53:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:21.275 23:53:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
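A few of the nvme3 id-ctrl words parsed so far decode readily: vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, cmic 0x2 flags a multi-controller subsystem, and ver packs the NVMe version as MJR(31:16).MNR(15:8).TER(7:0), so 0x10400 means NVMe 1.4.0. mdts=7 caps transfers at 2^7 pages, i.e. 512 KiB with the usual 4 KiB minimum page size (MPSMIN is not shown in this trace, so that last figure is an assumption). The version decode in one line of bash:

    ver=0x10400; printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))   # NVMe 1.4.0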
00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.275 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 
23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:21.276 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.277 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.278 23:53:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:21.278 
23:53:27 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:21.278 23:53:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:21.278 23:53:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:21.278 23:53:27 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.110 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.110 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.110 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.110 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:22.110 23:53:28 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:22.110 23:53:28 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:22.110 23:53:28 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.110 23:53:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:22.110 ************************************ 00:09:22.110 START TEST nvme_simple_copy 00:09:22.110 ************************************ 00:09:22.110 23:53:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:22.372 Initializing NVMe Controllers 00:09:22.372 Attaching to 0000:00:10.0 00:09:22.372 Controller supports SCC. Attached to 0000:00:10.0 00:09:22.372 Namespace ID: 1 size: 6GB 00:09:22.372 Initialization complete. 00:09:22.372 00:09:22.372 Controller QEMU NVMe Ctrl (12340 ) 00:09:22.372 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:22.372 Namespace Block Size:4096 00:09:22.372 Writing LBAs 0 to 63 with Random Data 00:09:22.372 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:22.372 LBAs matching Written Data: 64 00:09:22.372 00:09:22.372 real 0m0.249s 00:09:22.372 user 0m0.089s 00:09:22.372 sys 0m0.060s 00:09:22.372 23:53:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.372 ************************************ 00:09:22.372 END TEST nvme_simple_copy 00:09:22.372 ************************************ 00:09:22.372 23:53:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:22.372 00:09:22.372 real 0m7.379s 00:09:22.372 user 0m1.002s 00:09:22.372 sys 0m1.298s 00:09:22.372 23:53:28 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.372 23:53:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:22.372 ************************************ 00:09:22.372 END TEST nvme_scc 00:09:22.372 ************************************ 00:09:22.372 23:53:29 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:22.372 23:53:29 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:22.372 23:53:29 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:22.372 23:53:29 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:22.372 23:53:29 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:22.372 23:53:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.372 23:53:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.372 23:53:29 -- common/autotest_common.sh@10 -- # set +x 00:09:22.372 ************************************ 00:09:22.372 START TEST nvme_fdp 00:09:22.372 ************************************ 00:09:22.372 23:53:29 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:22.633 * Looking for test storage... 
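The get_ctrls_with_feature walk traced above reduces to one bit test per controller: ONCS bit 8 advertises the Copy command, which is what the simple-copy test exercises. A hedged standalone version of that check (the /dev/$ctrl paths and the awk extraction are assumptions for the sketch; functions.sh reads ONCS from its cached per-controller array instead):

  ctrl_has_scc() {
      local ctrl=$1 oncs
      oncs=$(nvme id-ctrl "/dev/$ctrl" |
             awk -F: '$1 ~ /^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
      (( oncs & (1 << 8) ))               # exit 0 only when the Copy bit is set
  }
  for ctrl in nvme0 nvme1 nvme2 nvme3; do
      ctrl_has_scc "$ctrl" && echo "$ctrl supports Simple Copy"
  done

With ONCS = 0x15d every controller here has bit 8 set, so all four pass and the test simply takes the first entry in ordered_ctrls, nvme1 at 0000:00:10.0.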
00:09:22.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.633 --rc genhtml_branch_coverage=1 00:09:22.633 --rc genhtml_function_coverage=1 00:09:22.633 --rc genhtml_legend=1 00:09:22.633 --rc geninfo_all_blocks=1 00:09:22.633 --rc geninfo_unexecuted_blocks=1 00:09:22.633 00:09:22.633 ' 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.633 --rc genhtml_branch_coverage=1 00:09:22.633 --rc genhtml_function_coverage=1 00:09:22.633 --rc genhtml_legend=1 00:09:22.633 --rc geninfo_all_blocks=1 00:09:22.633 --rc geninfo_unexecuted_blocks=1 00:09:22.633 00:09:22.633 ' 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.633 --rc genhtml_branch_coverage=1 00:09:22.633 --rc genhtml_function_coverage=1 00:09:22.633 --rc genhtml_legend=1 00:09:22.633 --rc geninfo_all_blocks=1 00:09:22.633 --rc geninfo_unexecuted_blocks=1 00:09:22.633 00:09:22.633 ' 00:09:22.633 23:53:29 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.633 --rc genhtml_branch_coverage=1 00:09:22.633 --rc genhtml_function_coverage=1 00:09:22.633 --rc genhtml_legend=1 00:09:22.633 --rc geninfo_all_blocks=1 00:09:22.633 --rc geninfo_unexecuted_blocks=1 00:09:22.633 00:09:22.633 ' 00:09:22.633 23:53:29 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.633 23:53:29 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.633 23:53:29 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:22.633 23:53:29 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.633 23:53:29 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.633 23:53:29 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.633 23:53:29 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.634 23:53:29 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.634 23:53:29 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.634 23:53:29 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:22.634 23:53:29 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
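The lt 1.15 2 gate above is scripts/common.sh comparing the installed lcov against major version 2 by splitting both version strings on '.', '-' and ':' and walking the fields numerically. The same idea in a self-contained form, assuming purely numeric fields (a sketch, not the cmp_versions source):

  version_lt() {
      local IFS=.-:                       # split fields the same way ver1/ver2 are read above
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                            # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov is older than 2"

The result is what selects the lcov 1.x spelling of the coverage switches (--rc lcov_branch_coverage=1 and friends) exported into LCOV_OPTS just above.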
00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:22.634 23:53:29 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:22.634 23:53:29 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.634 23:53:29 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.154 Waiting for block devices as requested 00:09:23.154 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.154 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.154 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.154 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.443 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.443 23:53:34 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.443 23:53:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.443 23:53:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.443 23:53:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.443 23:53:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
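scan_nvme_ctrls, entered just above, discovers controllers by globbing sysfs and mapping each one back to its PCI address (the pci=0000:00:11.0 step before pci_can_use). A hedged sketch of that discovery, using readlink rather than the helper's internals:

  declare -A bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue          # the glob may match nothing
      name=${ctrl##*/}                    # nvme0, nvme1, ...
      dev=$(readlink -f "$ctrl/device")   # resolves to a path ending in the BDF, e.g. 0000:00:11.0
      bdfs[$name]=${dev##*/}
  done
  declare -p bdfs

This is the bookkeeping that later lets the test jump from a chosen controller name straight to its bdf, as the nvme_scc run did with bdf=0000:00:10.0.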
00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.443 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:28.444 23:53:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:28.444 23:53:34 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
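Fields like oacs above are bitmasks, and decoding them shows what the test can ask of this QEMU controller. Per the NVMe base specification, 0x12a sets bits 1, 3, 5 and 8; a small decoder (bit names abbreviated from the spec, value taken from the trace):

  oacs=0x12a
  # Optional Admin Command Support bit names, per the NVMe base spec.
  names=([0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download/Commit"
         [3]="Namespace Management" [4]="Device Self-test" [5]="Directives"
         [6]="NVMe-MI Send/Receive" [7]="Virtualization Management"
         [8]="Doorbell Buffer Config")
  for bit in "${!names[@]}"; do
      (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${names[bit]}"
  done

For 0x12a that prints Format NVM, Namespace Management, Directives and Doorbell Buffer Config, which matches what the FDP-enabled QEMU controllers in this run are expected to offer.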
00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.444 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:28.445 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:28.445 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.445 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:28.446 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.446 
23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.446 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:28.447 23:53:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.447 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:28.448 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:28.448 23:53:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:28.449 23:53:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.449 23:53:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.449 23:53:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.449 23:53:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.449 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 
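Between the two register dumps, the trace walks /sys/class/nvme/nvme* (functions.sh@47-52): each controller that passes pci_can_use() gets its PCI address recorded (0000:00:11.0 for nvme0 in the bdfs assignment above, 0000:00:10.0 for nvme1) before nvme_get runs id-ctrl on it. A hedged sketch of that enumeration, with the pci_can_use() allow/deny filtering elided and ctrl_bdfs as an illustrative array name:

  # Walk NVMe controllers under sysfs and record each one's PCI BDF.
  declare -A ctrl_bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue              # glob may match nothing
      ctrl_dev=${ctrl##*/}                    # e.g. nvme1
      bdf=$(readlink -f "$ctrl/device")       # on PCIe controllers this
      ctrl_bdfs[$ctrl_dev]=${bdf##*/}         # resolves to e.g. 0000:00:10.0
  done
  for dev in "${!ctrl_bdfs[@]}"; do
      printf '%s -> %s\n' "$dev" "${ctrl_bdfs[$dev]}"
  done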
23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.450 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 
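The wctemp=343 and cctemp=373 thresholds captured just above are kelvin values, as NVMe reports all temperatures; a quick conversion makes the QEMU defaults more readable:

  # NVMe thermal thresholds are in kelvin; convert with shell arithmetic.
  wctemp=343 cctemp=373
  echo "warning threshold:  $((wctemp - 273)) C"    # 70 C
  echo "critical threshold: $((cctemp - 273)) C"    # 100 C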
23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.451 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.452 23:53:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 
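Several of the nvme1 fields captured above are packed: ver=0x10400 encodes the spec version, and sqes=0x66 / cqes=0x44 carry log2 queue-entry sizes in their nibbles. A quick decode in shell arithmetic (field layouts per the NVMe base specification; the variable names are just copies of the captured values):

  ver=0x10400 sqes=0x66 cqes=0x44
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
  printf 'SQE size: min %d, max %d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
  printf 'CQE size: min %d, max %d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
  # -> NVMe 1.4.0; 64-byte SQEs; 16-byte CQEs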
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.452 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:28.453 23:53:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.453 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:28.454 23:53:35 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.454 23:53:35 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:28.454 23:53:35 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.454 23:53:35 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:28.454 
23:53:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:28.454 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:28.455 23:53:35 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.455 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.455 23:53:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.456 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
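(Note: the repeating IFS=: / read -r reg val / eval frames above are one pass of the nvme_get helper in nvme/functions.sh: it runs the locally built nvme-cli (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2), splits each output line on the first colon, and stores the pair in a global associative array named by the first argument. A minimal sketch of that pattern, reconstructed from the @16-@23 frames; the whitespace normalization shown here is an assumption, not the literal SPDK code:

nvme_get() {                           # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                # matches the "local -gA 'nvme2=()'" frame
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skip lines without a "reg : val" pair
        reg=${reg//[[:space:]]/}       # assumed cleanup: "vid   " -> "vid"
        eval "${ref}[\$reg]=\${val# }" # nvme2[vid]=0x1b36, nvme2[sn]='12342 ', ...
    done < <(nvme "$@")
}

After a pass, fields read back directly, e.g. "${nvme2[vid]}" -> 0x1b36.)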
00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.457 23:53:35 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.457 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
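(Note: zooming out, the @47-@63 frames, visible where the trace hands off from nvme1 to nvme2, are the enumeration loop that drives these nvme_get calls. A condensed sketch under stated assumptions: pci_can_use is stubbed here, the real scripts/common.sh version also honors PCI_ALLOWED, and the BDF lookup is simplified to a sysfs readlink:

declare -A ctrls bdfs                    # the real script also fills nvmes[] / ordered_ctrls[]
pci_can_use() { [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]; }   # stub
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")        # e.g. 0000:00:12.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                                   # nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"          # fills nvme2=()
    for ns in "$ctrl/${ctrl##*/}n"*; do                    # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] && nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    bdfs["$ctrl_dev"]=$pci
done)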
00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:28.458 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.458 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
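(Note: with the nvme2 controller array fully populated above, later checks can compute derived values straight from it. An illustrative read of MDTS, assuming the default 4 KiB minimum page size, i.e. CAP.MPSMIN=0, which the id-ctrl dump alone does not confirm:

echo "nvme2 max transfer: $(( (1 << nvme2[mdts]) * 4096 )) bytes"   # mdts=7 -> 524288
echo "nvme2 subnqn:       ${nvme2[subnqn]}"                         # nqn.2019-08.org.qemu:12342)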
00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
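(Note: the id-ns fields being captured for nvme2n1 include flbas=0x4 just above, which selects LBA format 4; the matching lbaf4 entry, tagged "(in use)" further down in this trace, encodes the block size. A hypothetical decode of those two captured fields:

fmt=$((nvme2n1[flbas] & 0xf))            # FLBAS bits 3:0 -> format 4
if [[ ${nvme2n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]; then
    echo "nvme2n1 block size: $((1 << BASH_REMATCH[1])) bytes"   # lbads:12 -> 4096
fi)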
00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:28.459 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:28.460 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:28.461 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
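The trace above and below is the harness's nvme_get helper (nvme/functions.sh@16-23, per the markers in the log) walking every "reg : val" line that nvme-cli's id-ns prints and eval-ing it into a global associative array named after the device node. A minimal sketch of that pattern follows; it is a reconstruction from the trace, not the verbatim functions.sh source, and it assumes a stock nvme binary on PATH and a readable /dev/nvme2n2 (the log itself calls /usr/local/src/nvme-cli/nvme):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # e.g. nvme2n2=(), exactly as traced
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}         # "nsze    " -> "nsze"
          val=${val# }                     # drop the blank after the ":"
          [[ -n $reg && -n $val ]] || continue
          # Colons inside val (e.g. "ms:8 lbads:9 rp:0") survive because
          # read splits only on the first ":" when given two variables.
          # Assumption: id-ns values never contain quotes or expansions,
          # so the eval is safe for this input.
          eval "${ref}[${reg}]=\"${val}\"" # nvme2n2[nsze]="0x100000"
      done < <("$@")
  }

  # hypothetical invocation mirroring the traced one
  nvme_get nvme2n2 nvme id-ns /dev/nvme2n2
  echo "flbas=${nvme2n2[flbas]} nlbaf=${nvme2n2[nlbaf]}"

Once a namespace is captured this way, later FDP checks can index fields directly (e.g. ${nvme2n2[flbas]}) instead of re-parsing nvme-cli output, which is why the trace repeats the same loop for nvme2n1, nvme2n2, nvme2n3 and then for the nvme3 controller via id-ctrl.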
00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:28.463 
23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:28.463 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.724 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:28.724 23:53:35 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.724 23:53:35 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.724 23:53:35 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.724 23:53:35 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:28.724 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:28.724 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 
23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 
23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:28.726 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
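The loop that just closed above is nvme/functions.sh folding an `nvme id-ctrl`-style "register : value" dump into a per-controller associative array (nvme3[...]), which is what lets the ctrl_has_fdp scan read ctratt by name. A minimal standalone sketch of that pattern; the device path and array name here are illustrative, not the script's own variables:

    #!/usr/bin/env bash
    # Minimal sketch of the functions.sh pattern traced above: fold an
    # "register : value" dump into an associative array, then test CTRATT.
    # The real script evals into a dynamically named array (nvme3[reg]=val).
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # register names arrive padded
        val=${val# }                        # drop the space after ':'
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)       # assumes nvme-cli is installed
    # CTRATT bit 19 advertises FDP, the test ctrl_has_fdp applies above:
    if (( ${ctrl[ctratt]:-0} & 1 << 19 )); then
        echo "FDP-capable controller"
    fi

Only the controller reporting ctratt=0x88010 has bit 19 (0x80000) set, which is why the scan in this trace settles on nvme3.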
00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:28.727 23:53:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:28.727 23:53:35 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:28.727 23:53:35 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.569 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.569 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.569 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.569 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.826 23:53:36 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:29.826 23:53:36 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.826 23:53:36 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.826 23:53:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:29.826 ************************************ 00:09:29.826 START TEST nvme_flexible_data_placement 00:09:29.826 ************************************ 00:09:29.826 23:53:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:30.085 Initializing NVMe Controllers 00:09:30.085 Attaching to 0000:00:13.0 00:09:30.085 Controller supports FDP Attached to 0000:00:13.0 00:09:30.085 Namespace ID: 1 Endurance Group ID: 1 00:09:30.085 Initialization complete. 00:09:30.085 00:09:30.085 ================================== 00:09:30.085 == FDP tests for Namespace: #01 == 00:09:30.085 ================================== 00:09:30.085 00:09:30.085 Get Feature: FDP: 00:09:30.085 ================= 00:09:30.085 Enabled: Yes 00:09:30.085 FDP configuration Index: 0 00:09:30.085 00:09:30.085 FDP configurations log page 00:09:30.085 =========================== 00:09:30.085 Number of FDP configurations: 1 00:09:30.085 Version: 0 00:09:30.085 Size: 112 00:09:30.085 FDP Configuration Descriptor: 0 00:09:30.085 Descriptor Size: 96 00:09:30.085 Reclaim Group Identifier format: 2 00:09:30.085 FDP Volatile Write Cache: Not Present 00:09:30.085 FDP Configuration: Valid 00:09:30.085 Vendor Specific Size: 0 00:09:30.085 Number of Reclaim Groups: 2 00:09:30.085 Number of Reclaim Unit Handles: 8 00:09:30.085 Max Placement Identifiers: 128 00:09:30.085 Number of Namespaces Supported: 256 00:09:30.085 Reclaim Unit Nominal Size: 6000000 bytes 00:09:30.085 Estimated Reclaim Unit Time Limit: Not Reported 00:09:30.085 RUH Desc #000: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #001: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #002: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #003: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #004: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #005: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #006: RUH Type: Initially Isolated 00:09:30.085 RUH Desc #007: RUH Type: Initially Isolated 00:09:30.085 00:09:30.085 FDP reclaim unit handle usage log page 00:09:30.085 ====================================== 00:09:30.085 Number of Reclaim Unit Handles: 8 00:09:30.085 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:30.085 RUH Usage Desc #001: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #002: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #003: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #004: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #005: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #006: RUH Attributes: Unused 00:09:30.085 RUH Usage Desc #007: RUH Attributes: Unused 00:09:30.085 00:09:30.085 FDP statistics log page 00:09:30.085 ======================= 00:09:30.085 Host bytes with metadata written: 1088913408 00:09:30.085 Media bytes with metadata written: 1089323008 00:09:30.085 Media bytes erased: 0 00:09:30.085 00:09:30.085 FDP Reclaim unit handle status 00:09:30.085 ============================== 00:09:30.085 Number of RUHS descriptors: 2 00:09:30.085 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001188 00:09:30.085 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:30.085 00:09:30.085 FDP write on placement id: 0 success 00:09:30.085 00:09:30.085 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:30.085 00:09:30.085 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:30.085 00:09:30.085 Get Feature: FDP Events for Placement handle: #0 00:09:30.085 ======================== 00:09:30.085 Number of FDP Events: 6 00:09:30.085 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:30.085 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:30.085 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:30.085 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:30.085 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:30.085 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:30.085 00:09:30.085 FDP events log page 00:09:30.085 =================== 00:09:30.085 Number of FDP events: 1 00:09:30.085 FDP Event #0: 00:09:30.085 Event Type: RU Not Written to Capacity 00:09:30.085 Placement Identifier: Valid 00:09:30.085 NSID: Valid 00:09:30.085 Location: Valid 00:09:30.085 Placement Identifier: 0 00:09:30.085 Event Timestamp: 6 00:09:30.085 Namespace Identifier: 1 00:09:30.085 Reclaim Group Identifier: 0 00:09:30.085 Reclaim Unit Handle Identifier: 0 00:09:30.085 00:09:30.085 FDP test passed 00:09:30.085 00:09:30.085 real 0m0.234s 00:09:30.085 user 0m0.069s 00:09:30.085 sys 0m0.063s 00:09:30.085 23:53:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.085 ************************************ 00:09:30.085 END TEST nvme_flexible_data_placement 00:09:30.085 ************************************ 00:09:30.085 23:53:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:30.085 ************************************ 00:09:30.085 END TEST nvme_fdp 00:09:30.085 ************************************ 00:09:30.085 00:09:30.085 real 0m7.564s 00:09:30.085 user 0m1.016s 00:09:30.085 sys 0m1.324s 00:09:30.085 23:53:36 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.085 23:53:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:30.085 23:53:36 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:30.085 23:53:36 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:30.085 23:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.085 23:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.085 23:53:36 -- common/autotest_common.sh@10 -- # set +x 00:09:30.085 ************************************ 00:09:30.085 START TEST nvme_rpc 00:09:30.085 ************************************ 00:09:30.085 23:53:36 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:30.085 * Looking for test storage...
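The configurations, usage, statistics, and events dumps the fdp test printed above correspond to four FDP log pages. Outside the test binary the same pages can be pulled with generic nvme-cli get-log calls. A hedged sketch, assuming an FDP-capable controller at the path shown, an nvme-cli recent enough to expose --lsi, and the NVMe 2.0 log identifiers 0x20 through 0x23; FDP pages are endurance-group scoped, so --lsi carries the Endurance Group ID (1, per the test banner above):

    # Fetch the four FDP log pages walked by the test above (illustrative).
    dev=/dev/nvme3                                            # assumed device node
    nvme get-log "$dev" --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations
    nvme get-log "$dev" --log-id=0x21 --log-len=512 --lsi=1   # RUH usage
    nvme get-log "$dev" --log-id=0x22 --log-len=64  --lsi=1   # FDP statistics
    nvme get-log "$dev" --log-id=0x23 --log-len=512 --lsi=1   # FDP events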
00:09:30.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.085 23:53:36 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.085 23:53:36 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.085 23:53:36 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.344 23:53:36 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.344 --rc genhtml_branch_coverage=1 00:09:30.344 --rc genhtml_function_coverage=1 00:09:30.344 --rc genhtml_legend=1 00:09:30.344 --rc geninfo_all_blocks=1 00:09:30.344 --rc geninfo_unexecuted_blocks=1 00:09:30.344 00:09:30.344 ' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.344 --rc genhtml_branch_coverage=1 00:09:30.344 --rc genhtml_function_coverage=1 00:09:30.344 --rc genhtml_legend=1 00:09:30.344 --rc geninfo_all_blocks=1 00:09:30.344 --rc geninfo_unexecuted_blocks=1 00:09:30.344 00:09:30.344 ' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.344 --rc genhtml_branch_coverage=1 00:09:30.344 --rc genhtml_function_coverage=1 00:09:30.344 --rc genhtml_legend=1 00:09:30.344 --rc geninfo_all_blocks=1 00:09:30.344 --rc geninfo_unexecuted_blocks=1 00:09:30.344 00:09:30.344 ' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.344 --rc genhtml_branch_coverage=1 00:09:30.344 --rc genhtml_function_coverage=1 00:09:30.344 --rc genhtml_legend=1 00:09:30.344 --rc geninfo_all_blocks=1 00:09:30.344 --rc geninfo_unexecuted_blocks=1 00:09:30.344 00:09:30.344 ' 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:30.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65712 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:30.344 23:53:36 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65712 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65712 ']' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.344 23:53:36 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 [2024-11-18 23:53:36.938755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
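The attach target the nvme_rpc test uses next was resolved by get_first_nvme_bdf, xtraced just above: gen_nvme.sh emits a bdev config and jq extracts every traddr. Condensed, with the paths as used in this run:

    # get_first_nvme_bdf, as traced above: gen_nvme.sh emits a bdev config,
    # jq pulls each traddr, and the first entry becomes the attach target.
    rootdir=/home/vagrant/spdk_repo/spdk    # repo path used in this run
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"                       # 0000:00:10.0 in this run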
00:09:30.344 [2024-11-18 23:53:36.939034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65712 ] 00:09:30.603 [2024-11-18 23:53:37.098902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.603 [2024-11-18 23:53:37.198708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.603 [2024-11-18 23:53:37.198778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.169 23:53:37 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.169 23:53:37 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.169 23:53:37 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:31.427 Nvme0n1 00:09:31.428 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:31.428 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:31.686 request: 00:09:31.686 { 00:09:31.686 "bdev_name": "Nvme0n1", 00:09:31.686 "filename": "non_existing_file", 00:09:31.686 "method": "bdev_nvme_apply_firmware", 00:09:31.686 "req_id": 1 00:09:31.686 } 00:09:31.686 Got JSON-RPC error response 00:09:31.686 response: 00:09:31.686 { 00:09:31.686 "code": -32603, 00:09:31.686 "message": "open file failed." 00:09:31.686 } 00:09:31.686 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:31.686 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:31.686 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:31.944 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:31.944 23:53:38 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65712 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65712 ']' 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65712 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65712 00:09:31.944 killing process with pid 65712 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65712' 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65712 00:09:31.944 23:53:38 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65712 00:09:33.319 ************************************ 00:09:33.319 END TEST nvme_rpc 00:09:33.319 ************************************ 00:09:33.319 00:09:33.319 real 0m3.231s 00:09:33.319 user 0m6.127s 00:09:33.319 sys 0m0.491s 00:09:33.319 23:53:39 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.319 23:53:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.319 23:53:39 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:33.319 23:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:33.319 23:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.319 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:33.319 ************************************ 00:09:33.319 START TEST nvme_rpc_timeouts 00:09:33.319 ************************************ 00:09:33.319 23:53:39 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:33.319 * Looking for test storage... 00:09:33.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.319 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.319 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.319 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.578 23:53:40 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.578 --rc genhtml_branch_coverage=1 00:09:33.578 --rc genhtml_function_coverage=1 00:09:33.578 --rc genhtml_legend=1 00:09:33.578 --rc geninfo_all_blocks=1 00:09:33.578 --rc geninfo_unexecuted_blocks=1 00:09:33.578 00:09:33.578 ' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.578 --rc genhtml_branch_coverage=1 00:09:33.578 --rc genhtml_function_coverage=1 00:09:33.578 --rc genhtml_legend=1 00:09:33.578 --rc geninfo_all_blocks=1 00:09:33.578 --rc geninfo_unexecuted_blocks=1 00:09:33.578 00:09:33.578 ' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.578 --rc genhtml_branch_coverage=1 00:09:33.578 --rc genhtml_function_coverage=1 00:09:33.578 --rc genhtml_legend=1 00:09:33.578 --rc geninfo_all_blocks=1 00:09:33.578 --rc geninfo_unexecuted_blocks=1 00:09:33.578 00:09:33.578 ' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.578 --rc genhtml_branch_coverage=1 00:09:33.578 --rc genhtml_function_coverage=1 00:09:33.578 --rc genhtml_legend=1 00:09:33.578 --rc geninfo_all_blocks=1 00:09:33.578 --rc geninfo_unexecuted_blocks=1 00:09:33.578 00:09:33.578 ' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65777 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65777 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65809 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
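Before each test the harness probes the installed lcov and runs it through cmp_versions, the field-by-field walk xtraced above, to pick coverage options. A condensed re-implementation of that comparison; this is not the script's exact code and assumes purely numeric version fields:

    # Split both versions on the same separators the trace uses (. - :)
    # and compare numerically, field by field.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    # assumes lcov is installed, as in the trace:
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"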
00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65809 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65809 ']' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.578 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.578 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:33.578 [2024-11-18 23:53:40.157491] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:33.578 [2024-11-18 23:53:40.157609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65809 ] 00:09:33.836 [2024-11-18 23:53:40.309758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.836 [2024-11-18 23:53:40.408201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.836 [2024-11-18 23:53:40.408209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.403 Checking default timeout settings: 00:09:34.403 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.403 23:53:40 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:34.403 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:34.403 23:53:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:34.662 Making settings changes with rpc: 00:09:34.662 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:34.662 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:34.920 Check default vs. modified settings: 00:09:34.920 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:34.920 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 Setting action_on_timeout is changed as expected. 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 Setting timeout_us is changed as expected. 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
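Each "changed as expected" line above comes from the same three-step extraction: grep the setting out of a saved config snapshot, take the second column, and strip punctuation. Sketched with the pid-suffixed files from this run:

    # Per-setting check pattern from the trace; file names follow the
    # /tmp/settings_* convention used in this run.
    setting=timeout_us
    before=$(grep "$setting" /tmp/settings_default_65777 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_65777 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before == "$after" ]] && { echo "Setting $setting was not changed" >&2; exit 1; }
    echo "Setting $setting is changed as expected."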
00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65777 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.179 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.438 Setting timeout_admin_us is changed as expected. 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65777 /tmp/settings_modified_65777 00:09:35.438 23:53:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65809 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65809 ']' 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65809 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65809 00:09:35.438 killing process with pid 65809 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65809' 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65809 00:09:35.438 23:53:41 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65809 00:09:36.814 RPC TIMEOUT SETTING TEST PASSED. 00:09:36.814 23:53:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
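The timeout_admin_us check that follows repeats the same pattern, and the PASSED banner below caps the whole sequence: snapshot the defaults, flip the three timeout knobs over JSON-RPC, snapshot again, then compare as above. In outline, using the rpc.py flags from this run (the tmp-file names here are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_$$
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_$$
    # action_on_timeout, timeout_us and timeout_admin_us must now differ
    # between the two snapshots, as verified above.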
00:09:36.814 ************************************ 00:09:36.814 END TEST nvme_rpc_timeouts 00:09:36.814 ************************************ 00:09:36.814 00:09:36.814 real 0m3.256s 00:09:36.814 user 0m6.344s 00:09:36.814 sys 0m0.475s 00:09:36.814 23:53:43 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.814 23:53:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:36.814 23:53:43 -- spdk/autotest.sh@239 -- # uname -s 00:09:36.814 23:53:43 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:36.814 23:53:43 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:36.814 23:53:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.814 23:53:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.814 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:36.814 ************************************ 00:09:36.814 START TEST sw_hotplug 00:09:36.814 ************************************ 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:36.814 * Looking for test storage... 00:09:36.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.814 23:53:43 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.814 --rc genhtml_branch_coverage=1 00:09:36.814 --rc genhtml_function_coverage=1 00:09:36.814 --rc genhtml_legend=1 00:09:36.814 --rc geninfo_all_blocks=1 00:09:36.814 --rc geninfo_unexecuted_blocks=1 00:09:36.814 00:09:36.814 ' 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.814 --rc genhtml_branch_coverage=1 00:09:36.814 --rc genhtml_function_coverage=1 00:09:36.814 --rc genhtml_legend=1 00:09:36.814 --rc geninfo_all_blocks=1 00:09:36.814 --rc geninfo_unexecuted_blocks=1 00:09:36.814 00:09:36.814 ' 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.814 --rc genhtml_branch_coverage=1 00:09:36.814 --rc genhtml_function_coverage=1 00:09:36.814 --rc genhtml_legend=1 00:09:36.814 --rc geninfo_all_blocks=1 00:09:36.814 --rc geninfo_unexecuted_blocks=1 00:09:36.814 00:09:36.814 ' 00:09:36.814 23:53:43 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.814 --rc genhtml_branch_coverage=1 00:09:36.814 --rc genhtml_function_coverage=1 00:09:36.814 --rc genhtml_legend=1 00:09:36.814 --rc geninfo_all_blocks=1 00:09:36.814 --rc geninfo_unexecuted_blocks=1 00:09:36.814 00:09:36.814 ' 00:09:36.814 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.332 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.332 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.332 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.332 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
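The nvme_in_userspace expansion traced below boils down to one pipeline: list PCI devices numerically, keep programming interface 02 of class 01/subclass 08 (the NVMe programming interface), and accept each surviving BDF only if it still appears under the kernel nvme driver directory in sysfs. A condensed sketch of that walk:

    # Condensed form of the iter_pci_class_code / pci_can_use walk below.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' '{if (cc ~ $2) print $1}' | tr -d '"' \
        | while read -r bdf; do
              [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
          done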
00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.332 23:53:43 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:37.332 23:53:43 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:37.332 23:53:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.849 Waiting for block devices as requested 00:09:37.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.849 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.108 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.399 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:43.399 23:53:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:43.399 23:53:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.399 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:43.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.698 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:43.698 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:43.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.956 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:43.956 23:53:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66662 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:44.214 23:53:50 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:44.214 23:53:50 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:44.214 23:53:50 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:44.214 23:53:50 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:44.214 23:53:50 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:44.214 23:53:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:44.214 Initializing NVMe Controllers 00:09:44.214 Attaching to 0000:00:10.0 00:09:44.473 Attaching to 0000:00:11.0 00:09:44.473 Attached to 0000:00:11.0 00:09:44.473 Attached to 0000:00:10.0 00:09:44.473 Initialization complete. Starting I/O... 
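How the run above picked its two test controllers, condensed into plain shell. This is a reconstruction from the commands visible in the xtrace (the real helpers live in scripts/common.sh and may differ in detail), not the verbatim script:

# NVMe controllers are PCI class 01 (mass storage), subclass 08 (NVM),
# progif 02 (NVMe) -- hence iter_pci_class_code 01 08 02 in the trace.
# lspci -mm -n -D prints quoted, numeric, domain-qualified records; the awk
# filter keeps rows whose class field matches 0108, and tr strips the quotes,
# leaving bare BDFs.
nvme_in_userspace() {
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
}
nvmes=($(nvme_in_userspace))          # here: 0000:00:10.0, 11.0, 12.0, 13.0
nvme_count=2
nvmes=("${nvmes[@]::nvme_count}")     # hotplug only the first two
# setup.sh is then re-run with an allow-list, so 12.0 and 13.0 stay put
# (the "Skipping denied controller" lines above):
PCI_ALLOWED='0000:00:10.0 0000:00:11.0' ./scripts/setup.sh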
00:09:44.473 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:44.473 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:44.473 00:09:45.412 QEMU NVMe Ctrl (12341 ): 2624 I/Os completed (+2624) 00:09:45.412 QEMU NVMe Ctrl (12340 ): 2624 I/Os completed (+2624) 00:09:45.412 00:09:46.348 QEMU NVMe Ctrl (12341 ): 5895 I/Os completed (+3271) 00:09:46.348 QEMU NVMe Ctrl (12340 ): 5899 I/Os completed (+3275) 00:09:46.348 00:09:47.282 QEMU NVMe Ctrl (12341 ): 9527 I/Os completed (+3632) 00:09:47.282 QEMU NVMe Ctrl (12340 ): 9564 I/Os completed (+3665) 00:09:47.282 00:09:48.656 QEMU NVMe Ctrl (12341 ): 12807 I/Os completed (+3280) 00:09:48.656 QEMU NVMe Ctrl (12340 ): 12845 I/Os completed (+3281) 00:09:48.656 00:09:49.229 QEMU NVMe Ctrl (12341 ): 16574 I/Os completed (+3767) 00:09:49.229 QEMU NVMe Ctrl (12340 ): 16619 I/Os completed (+3774) 00:09:49.229 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.173 [2024-11-18 23:53:56.722229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:50.173 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:50.173 [2024-11-18 23:53:56.723476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.723515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.723531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.723545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:50.173 [2024-11-18 23:53:56.725027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.725139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.725170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.725228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.173 [2024-11-18 23:53:56.743055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:50.173 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:50.173 [2024-11-18 23:53:56.744013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.744117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.744202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.744230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:50.173 [2024-11-18 23:53:56.745671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.745704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.745716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 [2024-11-18 23:53:56.745726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.173 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:50.173 EAL: Scan for (pci) bus failed. 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.173 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:50.435 Attaching to 0000:00:10.0 00:09:50.435 Attached to 0000:00:10.0 00:09:50.435 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:50.435 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.435 23:53:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:50.435 Attaching to 0000:00:11.0 00:09:50.435 Attached to 0000:00:11.0 00:09:51.379 QEMU NVMe Ctrl (12340 ): 3723 I/Os completed (+3723) 00:09:51.379 QEMU NVMe Ctrl (12341 ): 3395 I/Os completed (+3395) 00:09:51.379 00:09:52.322 QEMU NVMe Ctrl (12340 ): 7381 I/Os completed (+3658) 00:09:52.322 QEMU NVMe Ctrl (12341 ): 7042 I/Os completed (+3647) 00:09:52.322 00:09:53.265 QEMU NVMe Ctrl (12340 ): 11060 I/Os completed (+3679) 00:09:53.265 QEMU NVMe Ctrl (12341 ): 10696 I/Os completed (+3654) 00:09:53.265 00:09:54.649 QEMU NVMe Ctrl (12340 ): 14688 I/Os completed (+3628) 00:09:54.649 QEMU NVMe Ctrl (12341 ): 14324 I/Os completed (+3628) 00:09:54.649 00:09:55.591 QEMU NVMe Ctrl (12340 ): 18345 I/Os completed (+3657) 00:09:55.591 QEMU NVMe Ctrl (12341 ): 17977 I/Os completed (+3653) 00:09:55.591 00:09:56.533 QEMU NVMe Ctrl (12340 ): 21999 I/Os completed (+3654) 00:09:56.533 QEMU NVMe Ctrl (12341 ): 21621 I/Os completed (+3644) 00:09:56.533 00:09:57.476 QEMU NVMe Ctrl (12340 ): 25640 I/Os completed (+3641) 
00:09:57.476 QEMU NVMe Ctrl (12341 ): 25279 I/Os completed (+3658) 00:09:57.476 00:09:58.419 QEMU NVMe Ctrl (12340 ): 29303 I/Os completed (+3663) 00:09:58.419 QEMU NVMe Ctrl (12341 ): 28959 I/Os completed (+3680) 00:09:58.419 00:09:59.361 QEMU NVMe Ctrl (12340 ): 33034 I/Os completed (+3731) 00:09:59.361 QEMU NVMe Ctrl (12341 ): 32692 I/Os completed (+3733) 00:09:59.361 00:10:00.302 QEMU NVMe Ctrl (12340 ): 36680 I/Os completed (+3646) 00:10:00.303 QEMU NVMe Ctrl (12341 ): 36356 I/Os completed (+3664) 00:10:00.303 00:10:01.245 QEMU NVMe Ctrl (12340 ): 40347 I/Os completed (+3667) 00:10:01.245 QEMU NVMe Ctrl (12341 ): 40019 I/Os completed (+3663) 00:10:01.245 00:10:02.640 QEMU NVMe Ctrl (12340 ): 44015 I/Os completed (+3668) 00:10:02.640 QEMU NVMe Ctrl (12341 ): 43698 I/Os completed (+3679) 00:10:02.640 00:10:02.640 23:54:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:02.640 23:54:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.640 23:54:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.640 23:54:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.640 [2024-11-18 23:54:09.001289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:02.640 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:02.640 [2024-11-18 23:54:09.002296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.002686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.002790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.002822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.640 [2024-11-18 23:54:09.004521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.004620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.004648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.004702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.640 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.640 [2024-11-18 23:54:09.025934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:02.640 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:02.640 [2024-11-18 23:54:09.026916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.026954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.640 [2024-11-18 23:54:09.026989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 [2024-11-18 23:54:09.027002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.641 [2024-11-18 23:54:09.028335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 [2024-11-18 23:54:09.028363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 [2024-11-18 23:54:09.028375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 [2024-11-18 23:54:09.028387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.641 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:02.641 EAL: Scan for (pci) bus failed. 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:02.641 Attaching to 0000:00:10.0 00:10:02.641 Attached to 0000:00:10.0 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.641 23:54:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:02.641 Attaching to 0000:00:11.0 00:10:02.641 Attached to 0000:00:11.0 00:10:03.625 QEMU NVMe Ctrl (12340 ): 2676 I/Os completed (+2676) 00:10:03.625 QEMU NVMe Ctrl (12341 ): 2345 I/Os completed (+2345) 00:10:03.625 00:10:04.246 QEMU NVMe Ctrl (12340 ): 6306 I/Os completed (+3630) 00:10:04.246 QEMU NVMe Ctrl (12341 ): 5982 I/Os completed (+3637) 00:10:04.246 00:10:05.631 QEMU NVMe Ctrl (12340 ): 10091 I/Os completed (+3785) 00:10:05.631 QEMU NVMe Ctrl (12341 ): 9769 I/Os completed (+3787) 00:10:05.631 00:10:06.574 QEMU NVMe Ctrl (12340 ): 13740 I/Os completed (+3649) 00:10:06.574 QEMU NVMe Ctrl (12341 ): 13425 I/Os completed (+3656) 00:10:06.574 00:10:07.518 QEMU NVMe Ctrl (12340 ): 17227 I/Os completed (+3487) 00:10:07.518 QEMU NVMe Ctrl (12341 ): 16987 I/Os completed (+3562) 00:10:07.518 00:10:08.462 QEMU NVMe Ctrl (12340 ): 20059 I/Os completed (+2832) 00:10:08.462 QEMU NVMe Ctrl (12341 ): 19822 I/Os completed (+2835) 00:10:08.462 00:10:09.407 QEMU NVMe Ctrl (12340 ): 22870 I/Os completed (+2811) 00:10:09.407 QEMU NVMe Ctrl (12341 ): 22651 I/Os completed (+2829) 00:10:09.407 
00:10:10.352 QEMU NVMe Ctrl (12340 ): 25510 I/Os completed (+2640) 00:10:10.352 QEMU NVMe Ctrl (12341 ): 25291 I/Os completed (+2640) 00:10:10.352 00:10:11.315 QEMU NVMe Ctrl (12340 ): 28178 I/Os completed (+2668) 00:10:11.315 QEMU NVMe Ctrl (12341 ): 27959 I/Os completed (+2668) 00:10:11.315 00:10:12.257 QEMU NVMe Ctrl (12340 ): 30902 I/Os completed (+2724) 00:10:12.257 QEMU NVMe Ctrl (12341 ): 30679 I/Os completed (+2720) 00:10:12.257 00:10:13.637 QEMU NVMe Ctrl (12340 ): 34586 I/Os completed (+3684) 00:10:13.637 QEMU NVMe Ctrl (12341 ): 34362 I/Os completed (+3683) 00:10:13.637 00:10:14.580 QEMU NVMe Ctrl (12340 ): 37561 I/Os completed (+2975) 00:10:14.580 QEMU NVMe Ctrl (12341 ): 37290 I/Os completed (+2928) 00:10:14.580 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.840 [2024-11-18 23:54:21.274283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:14.840 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:14.840 [2024-11-18 23:54:21.275767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.275964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.276006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.276082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:14.840 [2024-11-18 23:54:21.278352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.278450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.278485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.278518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.840 [2024-11-18 23:54:21.296240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
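Each hotplug event above follows the same remove/attach shape. xtrace elides redirection targets (the bare "echo 1" / "echo uio_pci_generic" lines), so the sysfs paths below are an assumption based on the standard Linux PCI hot-remove and rescan interfaces; the log does not confirm them:

# ASSUMED paths: the trace does not show where these echoes are redirected.
for dev in "${nvmes[@]}"; do
    # hot-remove the controller; this is what produces the "Controller removed:"
    # and "in failed state" messages above
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done
echo 1 > /sys/bus/pci/rescan   # bring the functions back (this path is visible in the script's trap below)
for dev in "${nvmes[@]}"; do
    # steer each rediscovered device back to the userspace driver; the trace's
    # uio_pci_generic / <bdf> / <bdf> / '' echo sequence suggests a
    # driver_override-style bind, simplified here to a single probe write
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done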
00:10:14.840 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:14.840 [2024-11-18 23:54:21.297557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.297748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.297842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.297876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:14.840 [2024-11-18 23:54:21.299949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.300099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.300145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 [2024-11-18 23:54:21.300160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:14.840 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:14.840 EAL: Scan for (pci) bus failed. 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.840 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:14.840 Attaching to 0000:00:10.0 00:10:14.840 Attached to 0000:00:10.0 00:10:15.101 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.101 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.101 23:54:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.102 Attaching to 0000:00:11.0 00:10:15.102 Attached to 0000:00:11.0 00:10:15.102 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.102 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.102 [2024-11-18 23:54:21.593607] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:27.334 23:54:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.334 23:54:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.334 23:54:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.86 00:10:27.334 23:54:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.86 00:10:27.334 23:54:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:27.334 23:54:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.86 00:10:27.334 23:54:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 2 00:10:27.334 remove_attach_helper took 42.86s to complete (handling 2 nvme drive(s)) 23:54:33 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66662 00:10:33.925 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66662) - No such process 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66662 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67212 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:33.925 23:54:39 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67212 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67212 ']' 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.925 23:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.925 [2024-11-18 23:54:39.682702] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
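The 42.86 s figure above comes from bash's built-in time keyword with a custom TIMEFORMAT. A stripped-down sketch of the timing_cmd wrapper the trace steps through (the real helper in autotest_common.sh also preserves the timed command's stdout via exec fd redirection, omitted here for brevity):

timing_cmd() {
    # %2R makes `time` print only elapsed real seconds, two decimals
    local TIMEFORMAT=%2R time
    # time writes to stderr; capture that while discarding the command's own
    # output (simplification: the real helper keeps that output instead)
    time=$( { time "$@" > /dev/null; } 2>&1 )
    echo "$time"
}
helper_time=$(timing_cmd remove_attach_helper 3 6 false)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2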
00:10:33.925 [2024-11-18 23:54:39.683106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67212 ] 00:10:33.925 [2024-11-18 23:54:39.841263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.925 [2024-11-18 23:54:39.968826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:34.184 23:54:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:34.184 23:54:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.767 23:54:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.767 23:54:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.767 23:54:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:40.767 23:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:40.767 [2024-11-18 23:54:46.764481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:40.767 [2024-11-18 23:54:46.765818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:46.765858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:46.765873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:46.765896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:46.765904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:46.765913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:46.765921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:46.765929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:46.765935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:46.765947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:46.765954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:46.765962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:47.164461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:40.767 [2024-11-18 23:54:47.165793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:47.165827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:47.165840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:47.165857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:47.165867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:47.165874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:47.165884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:47.165891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:47.165899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 [2024-11-18 23:54:47.165906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.767 [2024-11-18 23:54:47.165914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.767 [2024-11-18 23:54:47.165921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.767 23:54:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.767 23:54:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.767 23:54:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.767 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.025 23:54:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.216 23:54:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.216 [2024-11-18 23:54:59.664678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
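In target mode the script cannot watch an example app's output, so it polls the running spdk_tgt over RPC until the removed controllers' bdevs disappear. This is the helper and wait loop the trace above keeps repeating, reconstructed from the visible commands (the trace feeds jq through /dev/fd/63 process substitution; the plain pipe below is equivalent):

bdev_bdfs() {
    # PCI addresses backing whatever NVMe bdevs the target currently exposes
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
# after writing the remove knobs, spin until both bdevs are gone
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done
# the attach side then waits the same way for both bdevs to reappear
# before starting the next hotplug event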
00:10:53.216 [2024-11-18 23:54:59.666655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.216 [2024-11-18 23:54:59.666851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.216 [2024-11-18 23:54:59.666943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.216 [2024-11-18 23:54:59.666998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.216 [2024-11-18 23:54:59.667022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.216 [2024-11-18 23:54:59.667055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.216 [2024-11-18 23:54:59.667086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.216 [2024-11-18 23:54:59.667191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.216 [2024-11-18 23:54:59.667230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.216 [2024-11-18 23:54:59.667266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.216 [2024-11-18 23:54:59.667288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.216 [2024-11-18 23:54:59.667321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.216 23:54:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.784 [2024-11-18 23:55:00.164637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:53.784 [2024-11-18 23:55:00.165885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.784 [2024-11-18 23:55:00.165992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.784 [2024-11-18 23:55:00.166057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.784 [2024-11-18 23:55:00.166113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.784 [2024-11-18 23:55:00.166150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.784 [2024-11-18 23:55:00.166204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.784 [2024-11-18 23:55:00.166232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.784 [2024-11-18 23:55:00.166388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.784 [2024-11-18 23:55:00.166492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.784 [2024-11-18 23:55:00.166518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.784 [2024-11-18 23:55:00.166536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.784 [2024-11-18 23:55:00.166559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.784 23:55:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.784 23:55:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.784 23:55:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.784 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.785 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.785 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:53.785 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:53.785 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.785 23:55:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.010 23:55:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.010 23:55:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.010 [2024-11-18 23:55:12.564816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:06.010 [2024-11-18 23:55:12.566006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.010 [2024-11-18 23:55:12.566043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.010 [2024-11-18 23:55:12.566055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.010 [2024-11-18 23:55:12.566071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.010 [2024-11-18 23:55:12.566078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.010 [2024-11-18 23:55:12.566089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.010 [2024-11-18 23:55:12.566096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.010 [2024-11-18 23:55:12.566103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.010 [2024-11-18 23:55:12.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.010 [2024-11-18 23:55:12.566119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.010 [2024-11-18 23:55:12.566134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.010 [2024-11-18 23:55:12.566142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.579 [2024-11-18 23:55:12.964813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:06.579 [2024-11-18 23:55:12.966043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.579 [2024-11-18 23:55:12.966074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.579 [2024-11-18 23:55:12.966085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.579 [2024-11-18 23:55:12.966097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.579 [2024-11-18 23:55:12.966105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.579 [2024-11-18 23:55:12.966112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.579 [2024-11-18 23:55:12.966120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.579 [2024-11-18 23:55:12.966140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.579 [2024-11-18 23:55:12.966149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.579 [2024-11-18 23:55:12.966156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.579 [2024-11-18 23:55:12.966163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.579 [2024-11-18 23:55:12.966170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.579 23:55:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.579 23:55:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.579 23:55:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.579 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.580 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.580 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:06.840 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.840 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.840 23:55:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.67 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.67 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:11:19.079 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:19.079 23:55:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:19.079 23:55:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:19.079 23:55:25 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:25.667 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.667 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.667 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.667 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:25.668 [2024-11-18 23:55:31.466325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:25.668 [2024-11-18 23:55:31.467396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.467432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.467443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.467460] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.467467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.467478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.467485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.467493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.467500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.467508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.467514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.467524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:25.668 23:55:31 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.668 [2024-11-18 23:55:31.966476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:25.668 [2024-11-18 23:55:31.967344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.967361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.967372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.967382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.967390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.967397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.967405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.967411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.967419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 [2024-11-18 23:55:31.967426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.668 [2024-11-18 23:55:31.967434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.668 [2024-11-18 23:55:31.967440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.668 23:55:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:25.668 23:55:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.929 23:55:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.929 23:55:32 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.929 23:55:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.929 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.190 23:55:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.439 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.439 23:55:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.439 [2024-11-18 23:55:44.866685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:38.439 [2024-11-18 23:55:44.867605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.439 [2024-11-18 23:55:44.867631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.439 [2024-11-18 23:55:44.867642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.439 [2024-11-18 23:55:44.867659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.440 [2024-11-18 23:55:44.867666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.440 [2024-11-18 23:55:44.867673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.440 [2024-11-18 23:55:44.867680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.440 [2024-11-18 23:55:44.867688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.440 [2024-11-18 23:55:44.867695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.440 [2024-11-18 23:55:44.867703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.440 [2024-11-18 23:55:44.867709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.440 [2024-11-18 23:55:44.867716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.440 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.440 23:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.701 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.701 23:55:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.701 23:55:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.962 23:55:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.962 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:38.962 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.962 [2024-11-18 23:55:45.466688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:38.962 [2024-11-18 23:55:45.467805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-18 23:55:45.467832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-18 23:55:45.467844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-18 23:55:45.467856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-18 23:55:45.467867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-18 23:55:45.467874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-18 23:55:45.467882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-18 23:55:45.467889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-18 23:55:45.467896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-18 23:55:45.467904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-18 23:55:45.467911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-18 23:55:45.467917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.534 23:55:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.534 23:55:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.534 23:55:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.534 23:55:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
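[Annotation] For readers following the xtrace: sw_hotplug.sh toggles the controllers through sysfs. The trace prints only the echoed values (sw_hotplug.sh@40 and @56-@62), not the redirection targets, so the paths below are assumptions inferred from those values — a sketch of the remove/rescan/re-bind cycle, not the script verbatim:

    # Sketch only: the left-hand values come from the xtrace above; every
    # redirection target is an assumption (xtrace does not show them).
    echo 1               > "/sys/bus/pci/devices/$bdf/remove"           # @40: surprise-remove
    echo 1               > /sys/bus/pci/rescan                          # @56 (assumed target)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59 (assumed target)
    echo "$bdf"          > /sys/bus/pci/drivers_probe                   # @60/@61 (assumed; bdf is echoed twice)
    echo ''              > "/sys/bus/pci/devices/$bdf/driver_override"  # @62: clear the override (assumed)

After the re-bind, the `sleep 12` at @66 gives the devices time to reattach before the bdf list is compared against the expected pair.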
00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.534 23:55:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.833 23:55:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.833 23:55:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 23:55:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.833 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.833 [2024-11-18 23:55:58.266886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:51.833 [2024-11-18 23:55:58.267877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.833 [2024-11-18 23:55:58.267905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.833 [2024-11-18 23:55:58.267915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.833 [2024-11-18 23:55:58.267931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.833 [2024-11-18 23:55:58.267938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.833 [2024-11-18 23:55:58.267946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.833 [2024-11-18 23:55:58.267953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.833 [2024-11-18 23:55:58.267964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.833 [2024-11-18 23:55:58.267970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.833 [2024-11-18 23:55:58.267978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.834 [2024-11-18 23:55:58.267984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.834 [2024-11-18 23:55:58.267992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.834 23:55:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.834 23:55:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.834 23:55:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:51.834 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.094 [2024-11-18 23:55:58.766881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:52.094 [2024-11-18 23:55:58.768038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.094 [2024-11-18 23:55:58.768063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.094 [2024-11-18 23:55:58.768074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.094 [2024-11-18 23:55:58.768085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.094 [2024-11-18 23:55:58.768093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.094 [2024-11-18 23:55:58.768100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.094 [2024-11-18 23:55:58.768108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.094 [2024-11-18 23:55:58.768115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.094 [2024-11-18 23:55:58.768137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.094 [2024-11-18 23:55:58.768145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.094 [2024-11-18 23:55:58.768155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.094 [2024-11-18 23:55:58.768161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.354 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:52.354 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.354 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.354 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.354 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
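[Annotation] The bdev_bdfs helper and the wait loop that recur throughout this trace (sw_hotplug.sh@12-@13 and @50-@51) can be read directly off the xtrace; reconstructed, they amount to roughly:

    # Reconstructed from the xtrace; a sketch, not the verbatim script.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

jq pulls the PCI address out of each NVMe bdev's driver_specific blob, so an empty result means every controller has detached from the SPDK app — which is why the counts in the trace step down 2, 1, 0 as the two drives disappear.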
00:11:52.355 23:55:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.355 23:55:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.355 23:55:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.355 23:55:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:52.615 23:55:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.615 23:55:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.615 23:55:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:12:04.851 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:04.851 23:56:11 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67212 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67212 ']' 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67212 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67212 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.851 killing process with pid 
67212 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67212' 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67212 00:12:04.851 23:56:11 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67212 00:12:05.794 23:56:12 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:06.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.316 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.316 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.577 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.577 00:12:06.577 real 2m29.906s 00:12:06.577 user 1m51.807s 00:12:06.577 sys 0m16.712s 00:12:06.577 23:56:13 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.577 ************************************ 00:12:06.577 23:56:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 END TEST sw_hotplug 00:12:06.577 ************************************ 00:12:06.577 23:56:13 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:06.577 23:56:13 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.577 23:56:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.577 23:56:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.577 23:56:13 -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 ************************************ 00:12:06.577 START TEST nvme_xnvme 00:12:06.577 ************************************ 00:12:06.577 23:56:13 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.578 * Looking for test storage... 
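[Annotation] The killprocess teardown traced just before this suite started (autotest_common.sh@954-@978, killing pid 67212) reduces to approximately the following — a reconstruction from the xtrace, not the script verbatim:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        local process_name=
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the trace shows process_name=reactor_0, so the sudo branch
        # (whatever it does in the real script) is not taken here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }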
00:12:06.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.578 23:56:13 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.578 23:56:13 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.578 23:56:13 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.839 23:56:13 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.839 23:56:13 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.840 --rc genhtml_branch_coverage=1 00:12:06.840 --rc genhtml_function_coverage=1 00:12:06.840 --rc genhtml_legend=1 00:12:06.840 --rc geninfo_all_blocks=1 00:12:06.840 --rc geninfo_unexecuted_blocks=1 00:12:06.840 00:12:06.840 ' 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.840 --rc genhtml_branch_coverage=1 00:12:06.840 --rc genhtml_function_coverage=1 00:12:06.840 --rc genhtml_legend=1 00:12:06.840 --rc geninfo_all_blocks=1 00:12:06.840 --rc geninfo_unexecuted_blocks=1 00:12:06.840 00:12:06.840 ' 00:12:06.840 23:56:13 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.840 --rc genhtml_branch_coverage=1 00:12:06.840 --rc genhtml_function_coverage=1 00:12:06.840 --rc genhtml_legend=1 00:12:06.840 --rc geninfo_all_blocks=1 00:12:06.840 --rc geninfo_unexecuted_blocks=1 00:12:06.840 00:12:06.840 ' 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.840 --rc genhtml_branch_coverage=1 00:12:06.840 --rc genhtml_function_coverage=1 00:12:06.840 --rc genhtml_legend=1 00:12:06.840 --rc geninfo_all_blocks=1 00:12:06.840 --rc geninfo_unexecuted_blocks=1 00:12:06.840 00:12:06.840 ' 00:12:06.840 23:56:13 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.840 23:56:13 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.840 23:56:13 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.840 23:56:13 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.840 23:56:13 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.840 23:56:13 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.840 23:56:13 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.840 23:56:13 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.840 23:56:13 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:06.840 23:56:13 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.840 23:56:13 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.840 23:56:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 
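[Annotation] The xnvme_to_malloc_dd_copy test that starts below wires a 1 GiB null_blk device to an xnvme bdev (null0) and copies a 1 GiB malloc bdev (malloc0) onto it with spdk_dd. Condensed from the xtrace that follows (xnvme.sh@14-@42), the setup is roughly — where gen_conf emits the JSON config dumped further down:

    # Condensed from the xtrace below; fd 62 carries the JSON shown in the log.
    modprobe null_blk gb=1    # 1 GiB /dev/nullb0 backing the xnvme bdev
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
        --json /dev/fd/62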
************************************ 00:12:06.840 START TEST xnvme_to_malloc_dd_copy 00:12:06.840 ************************************ 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:06.840 23:56:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 { 00:12:06.840 "subsystems": [ 00:12:06.840 { 00:12:06.840 "subsystem": "bdev", 00:12:06.840 "config": [ 00:12:06.840 { 00:12:06.840 "params": { 00:12:06.840 "block_size": 512, 00:12:06.840 "num_blocks": 2097152, 00:12:06.840 "name": "malloc0" 00:12:06.840 }, 00:12:06.840 "method": "bdev_malloc_create" 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "params": { 00:12:06.840 "io_mechanism": "libaio", 00:12:06.840 "filename": "/dev/nullb0", 00:12:06.840 "name": "null0" 00:12:06.840 }, 00:12:06.840 "method": "bdev_xnvme_create" 00:12:06.840 }, 
00:12:06.840 { 00:12:06.840 "method": "bdev_wait_for_examine" 00:12:06.840 } 00:12:06.840 ] 00:12:06.840 } 00:12:06.840 ] 00:12:06.840 } 00:12:06.840 [2024-11-18 23:56:13.399381] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:06.840 [2024-11-18 23:56:13.399462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68595 ] 00:12:07.102 [2024-11-18 23:56:13.552419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.102 [2024-11-18 23:56:13.631108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.020  [2024-11-18T23:56:16.656Z] Copying: 301/1024 [MB] (301 MBps) [2024-11-18T23:56:17.599Z] Copying: 602/1024 [MB] (301 MBps) [2024-11-18T23:56:17.860Z] Copying: 904/1024 [MB] (302 MBps) [2024-11-18T23:56:19.775Z] Copying: 1024/1024 [MB] (average 301 MBps) 00:12:13.083 00:12:13.083 23:56:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:13.083 23:56:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:13.083 23:56:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:13.083 23:56:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.083 { 00:12:13.083 "subsystems": [ 00:12:13.083 { 00:12:13.083 "subsystem": "bdev", 00:12:13.083 "config": [ 00:12:13.083 { 00:12:13.083 "params": { 00:12:13.083 "block_size": 512, 00:12:13.083 "num_blocks": 2097152, 00:12:13.083 "name": "malloc0" 00:12:13.083 }, 00:12:13.083 "method": "bdev_malloc_create" 00:12:13.083 }, 00:12:13.083 { 00:12:13.083 "params": { 00:12:13.083 "io_mechanism": "libaio", 00:12:13.083 "filename": "/dev/nullb0", 00:12:13.083 "name": "null0" 00:12:13.083 }, 00:12:13.083 "method": "bdev_xnvme_create" 00:12:13.083 }, 00:12:13.083 { 00:12:13.083 "method": "bdev_wait_for_examine" 00:12:13.083 } 00:12:13.083 ] 00:12:13.083 } 00:12:13.083 ] 00:12:13.083 } 00:12:13.344 [2024-11-18 23:56:19.776586] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:13.344 [2024-11-18 23:56:19.776699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:12:13.344 [2024-11-18 23:56:19.932629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.344 [2024-11-18 23:56:20.016088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.260  [2024-11-18T23:56:22.893Z] Copying: 303/1024 [MB] (303 MBps) [2024-11-18T23:56:23.834Z] Copying: 608/1024 [MB] (304 MBps) [2024-11-18T23:56:24.403Z] Copying: 913/1024 [MB] (304 MBps) [2024-11-18T23:56:26.331Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:12:19.639 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.639 23:56:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.639 { 00:12:19.639 "subsystems": [ 00:12:19.639 { 00:12:19.639 "subsystem": "bdev", 00:12:19.639 "config": [ 00:12:19.639 { 00:12:19.639 "params": { 00:12:19.639 "block_size": 512, 00:12:19.639 "num_blocks": 2097152, 00:12:19.639 "name": "malloc0" 00:12:19.639 }, 00:12:19.639 "method": "bdev_malloc_create" 00:12:19.639 }, 00:12:19.639 { 00:12:19.639 "params": { 00:12:19.639 "io_mechanism": "io_uring", 00:12:19.639 "filename": "/dev/nullb0", 00:12:19.639 "name": "null0" 00:12:19.639 }, 00:12:19.639 "method": "bdev_xnvme_create" 00:12:19.639 }, 00:12:19.639 { 00:12:19.639 "method": "bdev_wait_for_examine" 00:12:19.639 } 00:12:19.639 ] 00:12:19.639 } 00:12:19.639 ] 00:12:19.639 } 00:12:19.639 [2024-11-18 23:56:26.082738] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
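[Annotation] At this point the suite has finished both copy directions under libaio and is repeating them under io_uring: xnvme.sh@38-@39 flip io_mechanism per pass, and @42/@47 run the copy forward and back. In outline (reconstructed from the xtrace; variable names are taken from it, the process substitution stands in for the /dev/fd/62 seen in the trace):

    # Reconstructed outline, not the verbatim script.
    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # @42: malloc -> xnvme
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # @47: xnvme -> malloc
    done

The log resumes below with the EAL startup line for the io_uring forward pass (spdk_pid68749).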
00:12:19.639 [2024-11-18 23:56:26.082851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68749 ] 00:12:19.639 [2024-11-18 23:56:26.236799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.639 [2024-11-18 23:56:26.318560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.557  [2024-11-18T23:56:29.191Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-18T23:56:30.133Z] Copying: 623/1024 [MB] (311 MBps) [2024-11-18T23:56:30.395Z] Copying: 935/1024 [MB] (311 MBps) [2024-11-18T23:56:32.310Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:25.618 00:12:25.618 23:56:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.618 23:56:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.618 23:56:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.618 23:56:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.618 { 00:12:25.618 "subsystems": [ 00:12:25.618 { 00:12:25.618 "subsystem": "bdev", 00:12:25.618 "config": [ 00:12:25.618 { 00:12:25.618 "params": { 00:12:25.619 "block_size": 512, 00:12:25.619 "num_blocks": 2097152, 00:12:25.619 "name": "malloc0" 00:12:25.619 }, 00:12:25.619 "method": "bdev_malloc_create" 00:12:25.619 }, 00:12:25.619 { 00:12:25.619 "params": { 00:12:25.619 "io_mechanism": "io_uring", 00:12:25.619 "filename": "/dev/nullb0", 00:12:25.619 "name": "null0" 00:12:25.619 }, 00:12:25.619 "method": "bdev_xnvme_create" 00:12:25.619 }, 00:12:25.619 { 00:12:25.619 "method": "bdev_wait_for_examine" 00:12:25.619 } 00:12:25.619 ] 00:12:25.619 } 00:12:25.619 ] 00:12:25.619 } 00:12:25.619 [2024-11-18 23:56:32.303727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:25.619 [2024-11-18 23:56:32.303816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68829 ] 00:12:25.879 [2024-11-18 23:56:32.452800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.879 [2024-11-18 23:56:32.531593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.796  [2024-11-18T23:56:35.431Z] Copying: 316/1024 [MB] (316 MBps) [2024-11-18T23:56:36.374Z] Copying: 633/1024 [MB] (316 MBps) [2024-11-18T23:56:36.636Z] Copying: 950/1024 [MB] (316 MBps) [2024-11-18T23:56:38.552Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:12:31.860 00:12:31.860 23:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:31.860 23:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:31.860 00:12:31.860 real 0m25.141s 00:12:31.860 user 0m22.094s 00:12:31.860 sys 0m2.514s 00:12:31.860 23:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.860 23:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.860 ************************************ 00:12:31.860 END TEST xnvme_to_malloc_dd_copy 00:12:31.860 ************************************ 00:12:31.860 23:56:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.860 23:56:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.860 23:56:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.860 23:56:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.860 ************************************ 00:12:31.860 START TEST xnvme_bdevperf 00:12:31.860 ************************************ 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:31.860 
23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.860 23:56:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.121 { 00:12:32.121 "subsystems": [ 00:12:32.121 { 00:12:32.121 "subsystem": "bdev", 00:12:32.121 "config": [ 00:12:32.121 { 00:12:32.121 "params": { 00:12:32.121 "io_mechanism": "libaio", 00:12:32.121 "filename": "/dev/nullb0", 00:12:32.121 "name": "null0" 00:12:32.121 }, 00:12:32.121 "method": "bdev_xnvme_create" 00:12:32.121 }, 00:12:32.121 { 00:12:32.121 "method": "bdev_wait_for_examine" 00:12:32.121 } 00:12:32.121 ] 00:12:32.121 } 00:12:32.121 ] 00:12:32.121 } 00:12:32.121 [2024-11-18 23:56:38.605372] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:32.121 [2024-11-18 23:56:38.605479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68932 ] 00:12:32.122 [2024-11-18 23:56:38.760885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.383 [2024-11-18 23:56:38.840780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.383 Running I/O for 5 seconds... 00:12:34.715 199680.00 IOPS, 780.00 MiB/s [2024-11-18T23:56:42.414Z] 199552.00 IOPS, 779.50 MiB/s [2024-11-18T23:56:43.358Z] 199786.67 IOPS, 780.42 MiB/s [2024-11-18T23:56:44.301Z] 199968.00 IOPS, 781.12 MiB/s 00:12:37.609 Latency(us) 00:12:37.609 [2024-11-18T23:56:44.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.609 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.609 null0 : 5.00 200035.59 781.39 0.00 0.00 317.67 110.28 1638.40 00:12:37.609 [2024-11-18T23:56:44.301Z] =================================================================================================================== 00:12:37.609 [2024-11-18T23:56:44.301Z] Total : 200035.59 781.39 0.00 0.00 317.67 110.28 1638.40 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:38.181 23:56:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.181 { 00:12:38.181 "subsystems": [ 00:12:38.181 { 00:12:38.181 "subsystem": "bdev", 00:12:38.181 "config": [ 00:12:38.181 { 00:12:38.181 "params": { 00:12:38.181 "io_mechanism": "io_uring", 00:12:38.181 "filename": "/dev/nullb0", 00:12:38.181 "name": "null0" 00:12:38.181 }, 00:12:38.181 "method": "bdev_xnvme_create" 00:12:38.181 }, 00:12:38.181 { 00:12:38.181 "method": 
"bdev_wait_for_examine" 00:12:38.181 } 00:12:38.181 ] 00:12:38.181 } 00:12:38.181 ] 00:12:38.181 } 00:12:38.181 [2024-11-18 23:56:44.665969] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:38.181 [2024-11-18 23:56:44.666082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69002 ] 00:12:38.181 [2024-11-18 23:56:44.820843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.442 [2024-11-18 23:56:44.896199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.442 Running I/O for 5 seconds... 00:12:40.399 231488.00 IOPS, 904.25 MiB/s [2024-11-18T23:56:48.474Z] 231328.00 IOPS, 903.62 MiB/s [2024-11-18T23:56:49.414Z] 231189.33 IOPS, 903.08 MiB/s [2024-11-18T23:56:50.355Z] 231184.00 IOPS, 903.06 MiB/s 00:12:43.663 Latency(us) 00:12:43.663 [2024-11-18T23:56:50.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.663 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.663 null0 : 5.00 230699.42 901.17 0.00 0.00 274.98 148.09 1569.08 00:12:43.663 [2024-11-18T23:56:50.355Z] =================================================================================================================== 00:12:43.663 [2024-11-18T23:56:50.355Z] Total : 230699.42 901.17 0.00 0.00 274.98 148.09 1569.08 00:12:44.235 23:56:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:44.235 23:56:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:44.235 00:12:44.235 real 0m12.129s 00:12:44.235 user 0m9.754s 00:12:44.235 sys 0m2.141s 00:12:44.235 23:56:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.235 23:56:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.235 ************************************ 00:12:44.235 END TEST xnvme_bdevperf 00:12:44.235 ************************************ 00:12:44.235 ************************************ 00:12:44.235 00:12:44.235 real 0m37.507s 00:12:44.235 user 0m31.955s 00:12:44.235 sys 0m4.763s 00:12:44.235 23:56:50 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.235 23:56:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.235 END TEST nvme_xnvme 00:12:44.235 ************************************ 00:12:44.235 23:56:50 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:44.235 23:56:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.235 23:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.235 23:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:44.235 ************************************ 00:12:44.235 START TEST blockdev_xnvme 00:12:44.235 ************************************ 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:44.235 * Looking for test storage... 
00:12:44.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.235 23:56:50 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.235 --rc genhtml_branch_coverage=1 00:12:44.235 --rc genhtml_function_coverage=1 00:12:44.235 --rc genhtml_legend=1 00:12:44.235 --rc geninfo_all_blocks=1 00:12:44.235 --rc geninfo_unexecuted_blocks=1 00:12:44.235 00:12:44.235 ' 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.235 --rc genhtml_branch_coverage=1 00:12:44.235 --rc genhtml_function_coverage=1 00:12:44.235 --rc genhtml_legend=1 
00:12:44.235 --rc geninfo_all_blocks=1 00:12:44.235 --rc geninfo_unexecuted_blocks=1 00:12:44.235 00:12:44.235 ' 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.235 --rc genhtml_branch_coverage=1 00:12:44.235 --rc genhtml_function_coverage=1 00:12:44.235 --rc genhtml_legend=1 00:12:44.235 --rc geninfo_all_blocks=1 00:12:44.235 --rc geninfo_unexecuted_blocks=1 00:12:44.235 00:12:44.235 ' 00:12:44.235 23:56:50 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.235 --rc genhtml_branch_coverage=1 00:12:44.235 --rc genhtml_function_coverage=1 00:12:44.235 --rc genhtml_legend=1 00:12:44.235 --rc geninfo_all_blocks=1 00:12:44.235 --rc geninfo_unexecuted_blocks=1 00:12:44.235 00:12:44.235 ' 00:12:44.235 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:44.235 23:56:50 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:44.235 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:44.235 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:44.235 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69144 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69144 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69144 ']' 00:12:44.236 23:56:50 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.236 23:56:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.497 [2024-11-18 23:56:50.960453] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:44.497 [2024-11-18 23:56:50.960564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69144 ] 00:12:44.497 [2024-11-18 23:56:51.111698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.757 [2024-11-18 23:56:51.186647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.329 23:56:51 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.329 23:56:51 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:45.329 23:56:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:45.329 23:56:51 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:45.329 23:56:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:45.329 23:56:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:45.329 23:56:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.590 Waiting for block devices as requested 00:12:45.590 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.590 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.849 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.134 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- 
# is_block_zoned nvme1n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:51.134 nvme0n1 00:12:51.134 nvme1n1 00:12:51.134 nvme2n1 00:12:51.134 nvme2n2 00:12:51.134 nvme2n3 00:12:51.134 nvme3n1 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.134 23:56:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:51.134 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:51.135 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e6208c4a-de4f-4abe-9d1e-11d53988130b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e6208c4a-de4f-4abe-9d1e-11d53988130b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e5dc4d1-17f4-4a83-bbda-4a9ee6336d03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2e5dc4d1-17f4-4a83-bbda-4a9ee6336d03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "3eda4a3e-5d39-4c3f-9089-45c2b77278c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3eda4a3e-5d39-4c3f-9089-45c2b77278c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a942bd91-f4d2-42df-90bc-d111a079807f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a942bd91-f4d2-42df-90bc-d111a079807f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c63c382a-0177-48cd-a115-7ddd303d9311"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c63c382a-0177-48cd-a115-7ddd303d9311",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e8d4ff22-fd37-49e9-8816-3909bce2f1b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e8d4ff22-fd37-49e9-8816-3909bce2f1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:51.135 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:51.135 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:51.135 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:51.135 23:56:57 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69144 
00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69144 ']' 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69144 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69144 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.135 killing process with pid 69144 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69144' 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69144 00:12:51.135 23:56:57 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69144 00:12:52.518 23:56:58 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:52.518 23:56:58 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.518 23:56:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.518 23:56:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.518 23:56:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.518 ************************************ 00:12:52.518 START TEST bdev_hello_world 00:12:52.518 ************************************ 00:12:52.518 23:56:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.518 [2024-11-18 23:56:58.858773] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:52.518 [2024-11-18 23:56:58.858904] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69492 ] 00:12:52.518 [2024-11-18 23:56:59.013282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.518 [2024-11-18 23:56:59.096608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.777 [2024-11-18 23:56:59.378326] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:52.777 [2024-11-18 23:56:59.378360] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:52.777 [2024-11-18 23:56:59.378372] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:52.777 [2024-11-18 23:56:59.379831] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:52.777 [2024-11-18 23:56:59.380159] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:52.777 [2024-11-18 23:56:59.380177] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:52.777 [2024-11-18 23:56:59.380390] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
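The hello_bdev notices above trace the example's fixed flow: open the bdev named by -b, write the string "Hello World!" into it, then read it back before stopping. A minimal standalone reproduction is sketched below; the /tmp path is an assumption for illustration, while the create parameters mirror the "bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring" entry printed during setup earlier in this run:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# hello_bdev opens nvme0n1, writes "Hello World!", and reads it back
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1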
00:12:52.777 00:12:52.777 [2024-11-18 23:56:59.380407] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.345 00:12:53.345 real 0m1.130s 00:12:53.345 user 0m0.854s 00:12:53.345 sys 0m0.165s 00:12:53.345 23:56:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.345 ************************************ 00:12:53.345 23:56:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:53.345 END TEST bdev_hello_world 00:12:53.345 ************************************ 00:12:53.345 23:56:59 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.345 23:56:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.345 23:56:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.345 23:56:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.345 ************************************ 00:12:53.345 START TEST bdev_bounds 00:12:53.345 ************************************ 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69530 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.345 Process bdevio pid: 69530 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69530' 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69530 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69530 ']' 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.345 23:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.603 [2024-11-18 23:57:00.038038] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:53.603 [2024-11-18 23:57:00.038171] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69530 ] 00:12:53.603 [2024-11-18 23:57:00.193679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.603 [2024-11-18 23:57:00.282213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.603 [2024-11-18 23:57:00.282352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.603 [2024-11-18 23:57:00.282464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.543 23:57:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.543 23:57:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:54.543 23:57:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.543 I/O targets: 00:12:54.543 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:54.543 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:54.543 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.543 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.543 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.543 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:54.543 00:12:54.543 00:12:54.543 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.543 http://cunit.sourceforge.net/ 00:12:54.543 00:12:54.543 00:12:54.543 Suite: bdevio tests on: nvme3n1 00:12:54.543 Test: blockdev write read block ...passed 00:12:54.543 Test: blockdev write zeroes read block ...passed 00:12:54.543 Test: blockdev write zeroes read no split ...passed 00:12:54.543 Test: blockdev write zeroes read split ...passed 00:12:54.543 Test: blockdev write zeroes read split partial ...passed 00:12:54.543 Test: blockdev reset ...passed 00:12:54.543 Test: blockdev write read 8 blocks ...passed 00:12:54.543 Test: blockdev write read size > 128k ...passed 00:12:54.543 Test: blockdev write read invalid size ...passed 00:12:54.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.543 Test: blockdev write read max offset ...passed 00:12:54.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.543 Test: blockdev writev readv 8 blocks ...passed 00:12:54.543 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.543 Test: blockdev writev readv block ...passed 00:12:54.543 Test: blockdev writev readv size > 128k ...passed 00:12:54.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.543 Test: blockdev comparev and writev ...passed 00:12:54.543 Test: blockdev nvme passthru rw ...passed 00:12:54.543 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.543 Test: blockdev nvme admin passthru ...passed 00:12:54.543 Test: blockdev copy ...passed 00:12:54.543 Suite: bdevio tests on: nvme2n3 00:12:54.543 Test: blockdev write read block ...passed 00:12:54.543 Test: blockdev write zeroes read block ...passed 00:12:54.543 Test: blockdev write zeroes read no split ...passed 00:12:54.543 Test: blockdev write zeroes read split ...passed 00:12:54.543 Test: blockdev write zeroes read split partial ...passed 00:12:54.543 Test: blockdev reset ...passed 
00:12:54.543 Test: blockdev write read 8 blocks ...passed 00:12:54.543 Test: blockdev write read size > 128k ...passed 00:12:54.543 Test: blockdev write read invalid size ...passed 00:12:54.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.543 Test: blockdev write read max offset ...passed 00:12:54.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.543 Test: blockdev writev readv 8 blocks ...passed 00:12:54.543 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.543 Test: blockdev writev readv block ...passed 00:12:54.543 Test: blockdev writev readv size > 128k ...passed 00:12:54.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.543 Test: blockdev comparev and writev ...passed 00:12:54.543 Test: blockdev nvme passthru rw ...passed 00:12:54.543 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.543 Test: blockdev nvme admin passthru ...passed 00:12:54.543 Test: blockdev copy ...passed 00:12:54.543 Suite: bdevio tests on: nvme2n2 00:12:54.543 Test: blockdev write read block ...passed 00:12:54.543 Test: blockdev write zeroes read block ...passed 00:12:54.543 Test: blockdev write zeroes read no split ...passed 00:12:54.543 Test: blockdev write zeroes read split ...passed 00:12:54.543 Test: blockdev write zeroes read split partial ...passed 00:12:54.543 Test: blockdev reset ...passed 00:12:54.543 Test: blockdev write read 8 blocks ...passed 00:12:54.543 Test: blockdev write read size > 128k ...passed 00:12:54.543 Test: blockdev write read invalid size ...passed 00:12:54.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.543 Test: blockdev write read max offset ...passed 00:12:54.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.543 Test: blockdev writev readv 8 blocks ...passed 00:12:54.543 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.543 Test: blockdev writev readv block ...passed 00:12:54.543 Test: blockdev writev readv size > 128k ...passed 00:12:54.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.544 Test: blockdev comparev and writev ...passed 00:12:54.544 Test: blockdev nvme passthru rw ...passed 00:12:54.544 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.544 Test: blockdev nvme admin passthru ...passed 00:12:54.544 Test: blockdev copy ...passed 00:12:54.544 Suite: bdevio tests on: nvme2n1 00:12:54.544 Test: blockdev write read block ...passed 00:12:54.544 Test: blockdev write zeroes read block ...passed 00:12:54.544 Test: blockdev write zeroes read no split ...passed 00:12:54.544 Test: blockdev write zeroes read split ...passed 00:12:54.544 Test: blockdev write zeroes read split partial ...passed 00:12:54.544 Test: blockdev reset ...passed 00:12:54.544 Test: blockdev write read 8 blocks ...passed 00:12:54.544 Test: blockdev write read size > 128k ...passed 00:12:54.544 Test: blockdev write read invalid size ...passed 00:12:54.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.544 Test: blockdev write read max offset ...passed 00:12:54.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.544 Test: blockdev writev readv 8 blocks 
...passed 00:12:54.544 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.804 Test: blockdev writev readv block ...passed 00:12:54.804 Test: blockdev writev readv size > 128k ...passed 00:12:54.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.804 Test: blockdev comparev and writev ...passed 00:12:54.804 Test: blockdev nvme passthru rw ...passed 00:12:54.804 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.804 Test: blockdev nvme admin passthru ...passed 00:12:54.804 Test: blockdev copy ...passed 00:12:54.804 Suite: bdevio tests on: nvme1n1 00:12:54.804 Test: blockdev write read block ...passed 00:12:54.804 Test: blockdev write zeroes read block ...passed 00:12:54.804 Test: blockdev write zeroes read no split ...passed 00:12:54.804 Test: blockdev write zeroes read split ...passed 00:12:54.804 Test: blockdev write zeroes read split partial ...passed 00:12:54.804 Test: blockdev reset ...passed 00:12:54.804 Test: blockdev write read 8 blocks ...passed 00:12:54.804 Test: blockdev write read size > 128k ...passed 00:12:54.804 Test: blockdev write read invalid size ...passed 00:12:54.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.804 Test: blockdev write read max offset ...passed 00:12:54.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.804 Test: blockdev writev readv 8 blocks ...passed 00:12:54.804 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.804 Test: blockdev writev readv block ...passed 00:12:54.804 Test: blockdev writev readv size > 128k ...passed 00:12:54.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.804 Test: blockdev comparev and writev ...passed 00:12:54.804 Test: blockdev nvme passthru rw ...passed 00:12:54.804 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.804 Test: blockdev nvme admin passthru ...passed 00:12:54.804 Test: blockdev copy ...passed 00:12:54.804 Suite: bdevio tests on: nvme0n1 00:12:54.804 Test: blockdev write read block ...passed 00:12:54.804 Test: blockdev write zeroes read block ...passed 00:12:54.804 Test: blockdev write zeroes read no split ...passed 00:12:54.804 Test: blockdev write zeroes read split ...passed 00:12:54.804 Test: blockdev write zeroes read split partial ...passed 00:12:54.804 Test: blockdev reset ...passed 00:12:54.804 Test: blockdev write read 8 blocks ...passed 00:12:54.804 Test: blockdev write read size > 128k ...passed 00:12:54.804 Test: blockdev write read invalid size ...passed 00:12:54.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.804 Test: blockdev write read max offset ...passed 00:12:54.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.804 Test: blockdev writev readv 8 blocks ...passed 00:12:54.804 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.804 Test: blockdev writev readv block ...passed 00:12:54.804 Test: blockdev writev readv size > 128k ...passed 00:12:54.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.804 Test: blockdev comparev and writev ...passed 00:12:54.804 Test: blockdev nvme passthru rw ...passed 00:12:54.804 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.804 Test: blockdev nvme admin passthru ...passed 00:12:54.804 Test: blockdev copy ...passed 
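All six xnvme bdevs from the "I/O targets" list run this same 23-case suite, which is where the totals that follow come from: 6 suites x 23 tests = 138. Per the traces above, the harness runs this in two halves: a bdevio server launched against the generated bdev config, and tests.py driving it. A rough sketch of that pairing, with the readiness wait and cleanup that the wrapper script performs omitted for brevity:

# launch bdevio on the generated config (flags as in the trace above)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &

# once the app is up, run every registered suite
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests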
00:12:54.804 00:12:54.804 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.804 suites 6 6 n/a 0 0 00:12:54.804 tests 138 138 138 0 0 00:12:54.804 asserts 780 780 780 0 n/a 00:12:54.804 00:12:54.804 Elapsed time = 1.070 seconds 00:12:54.804 0 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69530 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69530 ']' 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69530 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69530 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.804 killing process with pid 69530 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69530' 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69530 00:12:54.804 23:57:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69530 00:12:55.744 23:57:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:55.744 00:12:55.744 real 0m2.144s 00:12:55.744 user 0m5.379s 00:12:55.744 sys 0m0.291s 00:12:55.744 23:57:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.744 23:57:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:55.744 ************************************ 00:12:55.744 END TEST bdev_bounds 00:12:55.744 ************************************ 00:12:55.744 23:57:02 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.744 23:57:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:55.744 23:57:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.744 23:57:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.744 ************************************ 00:12:55.744 START TEST bdev_nbd 00:12:55.744 ************************************ 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69585 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69585 /var/tmp/spdk-nbd.sock 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69585 ']' 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.744 23:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:55.744 [2024-11-18 23:57:02.251758] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:55.744 [2024-11-18 23:57:02.251871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.744 [2024-11-18 23:57:02.410499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.004 [2024-11-18 23:57:02.531266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.574 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.834 
1+0 records in 00:12:56.834 1+0 records out 00:12:56.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115625 s, 3.5 MB/s 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.834 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.095 1+0 records in 00:12:57.095 1+0 records out 00:12:57.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159122 s, 2.6 MB/s 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.095 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:57.355 23:57:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.355 1+0 records in 00:12:57.355 1+0 records out 00:12:57.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00157057 s, 2.6 MB/s 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.355 23:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.615 1+0 records in 00:12:57.615 1+0 records out 00:12:57.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975137 s, 4.2 MB/s 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.615 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.616 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.616 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.876 1+0 records in 00:12:57.876 1+0 records out 00:12:57.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010287 s, 4.0 MB/s 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.876 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:58.135 23:57:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.135 1+0 records in 00:12:58.135 1+0 records out 00:12:58.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104585 s, 3.9 MB/s 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:58.135 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.395 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:58.395 { 00:12:58.395 "nbd_device": "/dev/nbd0", 00:12:58.395 "bdev_name": "nvme0n1" 00:12:58.395 }, 00:12:58.395 { 00:12:58.395 "nbd_device": "/dev/nbd1", 00:12:58.395 "bdev_name": "nvme1n1" 00:12:58.395 }, 00:12:58.395 { 00:12:58.395 "nbd_device": "/dev/nbd2", 00:12:58.395 "bdev_name": "nvme2n1" 00:12:58.395 }, 00:12:58.395 { 00:12:58.395 "nbd_device": "/dev/nbd3", 00:12:58.395 "bdev_name": "nvme2n2" 00:12:58.395 }, 00:12:58.395 { 00:12:58.395 "nbd_device": "/dev/nbd4", 00:12:58.395 "bdev_name": "nvme2n3" 00:12:58.395 }, 00:12:58.395 { 00:12:58.396 "nbd_device": "/dev/nbd5", 00:12:58.396 "bdev_name": "nvme3n1" 00:12:58.396 } 00:12:58.396 ]' 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:58.396 { 00:12:58.396 "nbd_device": "/dev/nbd0", 00:12:58.396 "bdev_name": "nvme0n1" 00:12:58.396 }, 00:12:58.396 { 00:12:58.396 "nbd_device": "/dev/nbd1", 00:12:58.396 "bdev_name": "nvme1n1" 00:12:58.396 }, 00:12:58.396 { 00:12:58.396 "nbd_device": "/dev/nbd2", 00:12:58.396 "bdev_name": "nvme2n1" 00:12:58.396 }, 00:12:58.396 { 00:12:58.396 "nbd_device": "/dev/nbd3", 00:12:58.396 "bdev_name": "nvme2n2" 00:12:58.396 }, 00:12:58.396 { 00:12:58.396 "nbd_device": "/dev/nbd4", 00:12:58.396 "bdev_name": "nvme2n3" 00:12:58.396 }, 00:12:58.396 { 00:12:58.396 "nbd_device": 
"/dev/nbd5", 00:12:58.396 "bdev_name": "nvme3n1" 00:12:58.396 } 00:12:58.396 ]' 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.396 23:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.656 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.916 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.177 23:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.438 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.698 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.959 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:00.220 /dev/nbd0 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.220 1+0 records in 00:13:00.220 1+0 records out 00:13:00.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401912 s, 10.2 MB/s 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.220 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:00.480 /dev/nbd1 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.480 1+0 records in 00:13:00.480 1+0 records out 00:13:00.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108002 s, 3.8 MB/s 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.480 23:57:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.480 23:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:00.480 /dev/nbd10 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.480 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.740 1+0 records in 00:13:00.740 1+0 records out 00:13:00.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111193 s, 3.7 MB/s 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:00.740 /dev/nbd11 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.740 23:57:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.740 1+0 records in 00:13:00.740 1+0 records out 00:13:00.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111173 s, 3.7 MB/s 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.740 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:01.000 /dev/nbd12 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.000 1+0 records in 00:13:01.000 1+0 records out 00:13:01.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741181 s, 5.5 MB/s 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:01.000 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:01.261 /dev/nbd13 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.261 1+0 records in 00:13:01.261 1+0 records out 00:13:01.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117968 s, 3.5 MB/s 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.261 23:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.521 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd0", 00:13:01.521 "bdev_name": "nvme0n1" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd1", 00:13:01.521 "bdev_name": "nvme1n1" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd10", 00:13:01.521 "bdev_name": "nvme2n1" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd11", 00:13:01.521 "bdev_name": "nvme2n2" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd12", 00:13:01.521 "bdev_name": "nvme2n3" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd13", 00:13:01.521 "bdev_name": "nvme3n1" 00:13:01.521 } 00:13:01.521 ]' 00:13:01.521 23:57:08 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd0", 00:13:01.521 "bdev_name": "nvme0n1" 00:13:01.521 }, 00:13:01.521 { 00:13:01.521 "nbd_device": "/dev/nbd1", 00:13:01.522 "bdev_name": "nvme1n1" 00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "nbd_device": "/dev/nbd10", 00:13:01.522 "bdev_name": "nvme2n1" 00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "nbd_device": "/dev/nbd11", 00:13:01.522 "bdev_name": "nvme2n2" 00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "nbd_device": "/dev/nbd12", 00:13:01.522 "bdev_name": "nvme2n3" 00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "nbd_device": "/dev/nbd13", 00:13:01.522 "bdev_name": "nvme3n1" 00:13:01.522 } 00:13:01.522 ]' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:01.522 /dev/nbd1 00:13:01.522 /dev/nbd10 00:13:01.522 /dev/nbd11 00:13:01.522 /dev/nbd12 00:13:01.522 /dev/nbd13' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:01.522 /dev/nbd1 00:13:01.522 /dev/nbd10 00:13:01.522 /dev/nbd11 00:13:01.522 /dev/nbd12 00:13:01.522 /dev/nbd13' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:01.522 256+0 records in 00:13:01.522 256+0 records out 00:13:01.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805773 s, 130 MB/s 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.522 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.783 256+0 records in 00:13:01.783 256+0 records out 00:13:01.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234678 s, 4.5 MB/s 00:13:01.783 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.783 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:02.043 256+0 records in 00:13:02.043 256+0 records out 00:13:02.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233719 s, 
4.5 MB/s 00:13:02.043 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.043 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:02.304 256+0 records in 00:13:02.304 256+0 records out 00:13:02.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196795 s, 5.3 MB/s 00:13:02.304 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.304 23:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:02.564 256+0 records in 00:13:02.564 256+0 records out 00:13:02.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172478 s, 6.1 MB/s 00:13:02.564 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.564 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:02.825 256+0 records in 00:13:02.825 256+0 records out 00:13:02.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241413 s, 4.3 MB/s 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:02.825 256+0 records in 00:13:02.825 256+0 records out 00:13:02.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077273 s, 13.6 MB/s 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.825 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.826 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.086 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.346 23:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:13:03.346 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:03.605 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:03.605 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.606 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.866 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.127 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:04.398 23:57:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:04.703 malloc_lvol_verify 00:13:04.703 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:04.703 0706b920-83cc-49c1-b4bb-93bd37d4ee17 00:13:04.703 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:04.963 e7a3b502-3bae-418d-ad0b-c2af1d87a4bf 00:13:04.963 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:05.222 /dev/nbd0 00:13:05.222 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:05.222 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:05.222 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:05.222 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:05.222 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:05.222 mke2fs 1.47.0 (5-Feb-2023) 00:13:05.222 Discarding device 
blocks: 0/4096 done 00:13:05.222 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:05.222 00:13:05.223 Allocating group tables: 0/1 done 00:13:05.223 Writing inode tables: 0/1 done 00:13:05.223 Creating journal (1024 blocks): done 00:13:05.223 Writing superblocks and filesystem accounting information: 0/1 done 00:13:05.223 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.223 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:05.483 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.483 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.483 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.483 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69585 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69585 ']' 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69585 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69585 00:13:05.484 killing process with pid 69585 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69585' 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69585 00:13:05.484 23:57:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69585 00:13:06.057 23:57:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:06.057 00:13:06.057 real 0m10.376s 00:13:06.057 user 0m14.183s 00:13:06.057 sys 0m3.630s 00:13:06.057 ************************************ 00:13:06.057 END TEST bdev_nbd 00:13:06.057 ************************************ 00:13:06.057 23:57:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.057 23:57:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # 
set +x 00:13:06.057 23:57:12 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:06.057 23:57:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:06.057 23:57:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:06.057 23:57:12 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:06.057 23:57:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.057 23:57:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.057 23:57:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.057 ************************************ 00:13:06.057 START TEST bdev_fio 00:13:06.057 ************************************ 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:06.057 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.057 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:06.058 ************************************ 00:13:06.058 START TEST bdev_fio_rw_verify 00:13:06.058 ************************************ 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:06.058 23:57:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.319 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.319 fio-3.35 00:13:06.319 Starting 6 threads 00:13:18.557 00:13:18.557 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69996: Mon Nov 18 23:57:23 2024 00:13:18.557 read: IOPS=19.2k, BW=75.0MiB/s (78.7MB/s)(750MiB/10003msec) 00:13:18.557 slat (usec): min=2, max=3438, avg= 5.72, stdev=17.52 00:13:18.557 clat (usec): min=84, max=9067, avg=1028.90, stdev=932.02 00:13:18.557 lat (usec): min=87, max=9078, avg=1034.62, stdev=933.05 
00:13:18.557 clat percentiles (usec): 00:13:18.557 | 50.000th=[ 644], 99.000th=[ 4113], 99.900th=[ 5800], 99.990th=[ 6849], 00:13:18.557 | 99.999th=[ 9110] 00:13:18.557 write: IOPS=19.7k, BW=76.9MiB/s (80.6MB/s)(769MiB/10003msec); 0 zone resets 00:13:18.557 slat (usec): min=3, max=4656, avg=33.47, stdev=133.36 00:13:18.557 clat (usec): min=53, max=11653, avg=1159.68, stdev=1054.04 00:13:18.557 lat (usec): min=88, max=11673, avg=1193.15, stdev=1071.80 00:13:18.557 clat percentiles (usec): 00:13:18.557 | 50.000th=[ 701], 99.000th=[ 4686], 99.900th=[ 6521], 99.990th=[ 9110], 00:13:18.557 | 99.999th=[11600] 00:13:18.557 bw ( KiB/s): min=41472, max=186224, per=100.00%, avg=80287.26, stdev=8254.71, samples=114 00:13:18.557 iops : min=10368, max=46554, avg=20071.00, stdev=2063.67, samples=114 00:13:18.557 lat (usec) : 100=0.10%, 250=11.53%, 500=27.14%, 750=14.12%, 1000=7.89% 00:13:18.557 lat (msec) : 2=21.97%, 4=15.63%, 10=1.62%, 20=0.01% 00:13:18.557 cpu : usr=45.70%, sys=32.63%, ctx=5832, majf=0, minf=18073 00:13:18.557 IO depths : 1=11.8%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.557 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.557 issued rwts: total=192086,196835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.557 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:18.557 00:13:18.557 Run status group 0 (all jobs): 00:13:18.557 READ: bw=75.0MiB/s (78.7MB/s), 75.0MiB/s-75.0MiB/s (78.7MB/s-78.7MB/s), io=750MiB (787MB), run=10003-10003msec 00:13:18.557 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=769MiB (806MB), run=10003-10003msec 00:13:18.557 ----------------------------------------------------- 00:13:18.557 Suppressions used: 00:13:18.557 count bytes template 00:13:18.557 6 48 /usr/src/fio/parse.c 00:13:18.557 4614 442944 /usr/src/fio/iolog.c 00:13:18.557 1 8 libtcmalloc_minimal.so 00:13:18.557 1 904 libcrypto.so 00:13:18.557 ----------------------------------------------------- 00:13:18.557 00:13:18.557 00:13:18.557 real 0m12.100s 00:13:18.557 user 0m28.995s 00:13:18.557 sys 0m19.944s 00:13:18.557 23:57:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.557 ************************************ 00:13:18.558 END TEST bdev_fio_rw_verify 00:13:18.558 ************************************ 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e6208c4a-de4f-4abe-9d1e-11d53988130b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e6208c4a-de4f-4abe-9d1e-11d53988130b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e5dc4d1-17f4-4a83-bbda-4a9ee6336d03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2e5dc4d1-17f4-4a83-bbda-4a9ee6336d03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "3eda4a3e-5d39-4c3f-9089-45c2b77278c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3eda4a3e-5d39-4c3f-9089-45c2b77278c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a942bd91-f4d2-42df-90bc-d111a079807f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a942bd91-f4d2-42df-90bc-d111a079807f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c63c382a-0177-48cd-a115-7ddd303d9311"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c63c382a-0177-48cd-a115-7ddd303d9311",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e8d4ff22-fd37-49e9-8816-3909bce2f1b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e8d4ff22-fd37-49e9-8816-3909bce2f1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.558 /home/vagrant/spdk_repo/spdk 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:18.558 00:13:18.558 real 0m12.280s 00:13:18.558 user 
0m29.069s 00:13:18.558 sys 0m20.030s 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.558 23:57:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:18.558 ************************************ 00:13:18.558 END TEST bdev_fio 00:13:18.558 ************************************ 00:13:18.558 23:57:24 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:18.558 23:57:24 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.558 23:57:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:18.558 23:57:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.558 23:57:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.558 ************************************ 00:13:18.558 START TEST bdev_verify 00:13:18.558 ************************************ 00:13:18.558 23:57:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.558 [2024-11-18 23:57:25.038720] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:18.558 [2024-11-18 23:57:25.038866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70165 ] 00:13:18.558 [2024-11-18 23:57:25.202821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.820 [2024-11-18 23:57:25.327158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.820 [2024-11-18 23:57:25.327189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.393 Running I/O for 5 seconds... 
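
[annotation] The verify pass starting here is driven by bdevperf against the bdev table generated earlier in this run. A minimal sketch of the equivalent standalone invocation, assuming the same repo layout and the already-generated bdev.json; flag glosses are the usual bdevperf meanings (-q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds, -m core mask matching the two reactors started above; -C is carried over unchanged from the harness command line):

    # re-run the same verify workload by hand against the generated config
    sudo /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
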
00:13:21.279 22272.00 IOPS, 87.00 MiB/s [2024-11-18T23:57:29.361Z] 22704.00 IOPS, 88.69 MiB/s [2024-11-18T23:57:30.305Z] 22794.67 IOPS, 89.04 MiB/s [2024-11-18T23:57:30.876Z] 23504.00 IOPS, 91.81 MiB/s [2024-11-18T23:57:31.137Z] 23564.00 IOPS, 92.05 MiB/s 00:13:24.445 Latency(us) 00:13:24.445 [2024-11-18T23:57:31.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.445 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0xa0000 00:13:24.445 nvme0n1 : 5.06 1871.78 7.31 0.00 0.00 68258.93 10737.82 72593.72 00:13:24.445 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0xa0000 length 0xa0000 00:13:24.445 nvme0n1 : 5.07 1844.09 7.20 0.00 0.00 69280.62 10032.05 70173.93 00:13:24.445 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0xbd0bd 00:13:24.445 nvme1n1 : 5.07 2254.61 8.81 0.00 0.00 56358.23 6755.25 58074.98 00:13:24.445 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:24.445 nvme1n1 : 5.06 2294.20 8.96 0.00 0.00 55472.36 4411.08 60898.07 00:13:24.445 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0x80000 00:13:24.445 nvme2n1 : 5.06 1971.60 7.70 0.00 0.00 64313.40 5747.00 66544.25 00:13:24.445 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x80000 length 0x80000 00:13:24.445 nvme2n1 : 5.06 1873.34 7.32 0.00 0.00 67980.62 7662.67 69367.34 00:13:24.445 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0x80000 00:13:24.445 nvme2n2 : 5.07 1892.80 7.39 0.00 0.00 66838.48 5721.80 60494.77 00:13:24.445 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x80000 length 0x80000 00:13:24.445 nvme2n2 : 5.06 1845.48 7.21 0.00 0.00 68674.58 7662.67 60898.07 00:13:24.445 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0x80000 00:13:24.445 nvme2n3 : 5.08 1888.20 7.38 0.00 0.00 66880.88 8771.74 67754.14 00:13:24.445 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x80000 length 0x80000 00:13:24.445 nvme2n3 : 5.08 1840.80 7.19 0.00 0.00 68702.18 10132.87 60898.07 00:13:24.445 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x0 length 0x20000 00:13:24.445 nvme3n1 : 5.08 1890.51 7.38 0.00 0.00 66699.29 5444.53 65737.65 00:13:24.445 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.445 Verification LBA range: start 0x20000 length 0x20000 00:13:24.445 nvme3n1 : 5.08 1840.13 7.19 0.00 0.00 68606.58 5923.45 64124.46 00:13:24.445 [2024-11-18T23:57:31.137Z] =================================================================================================================== 00:13:24.445 [2024-11-18T23:57:31.137Z] Total : 23307.53 91.05 0.00 0.00 65317.70 4411.08 72593.72 00:13:25.017 00:13:25.017 real 0m6.732s 00:13:25.017 user 0m10.971s 00:13:25.017 sys 0m1.378s 00:13:25.017 23:57:31 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.017 ************************************ 00:13:25.017 END TEST bdev_verify 00:13:25.017 ************************************ 00:13:25.017 23:57:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:25.278 23:57:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:25.278 23:57:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:25.278 23:57:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.278 23:57:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.278 ************************************ 00:13:25.278 START TEST bdev_verify_big_io 00:13:25.278 ************************************ 00:13:25.278 23:57:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:25.278 [2024-11-18 23:57:31.844920] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:25.278 [2024-11-18 23:57:31.845073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70269 ] 00:13:25.538 [2024-11-18 23:57:32.014668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.538 [2024-11-18 23:57:32.136044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.538 [2024-11-18 23:57:32.136175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.112 Running I/O for 5 seconds... 
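
[annotation] In the big-I/O results below, the MiB/s column follows directly from IOPS times the 64 KiB I/O size (-o 65536). A quick shell sanity check using the 3446-IOPS sample reported below:

    # IOPS x I/O size in bytes / 2^20 bytes-per-MiB = throughput in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 3446 * 65536 / 1048576 }'   # prints 215.38 MiB/s
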
00:13:32.344 2136.00 IOPS, 133.50 MiB/s [2024-11-18T23:57:39.036Z] 3446.00 IOPS, 215.38 MiB/s [2024-11-18T23:57:39.036Z] 2788.00 IOPS, 174.25 MiB/s 00:13:32.344 Latency(us) 00:13:32.344 [2024-11-18T23:57:39.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.344 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0xa000 00:13:32.344 nvme0n1 : 6.09 99.76 6.23 0.00 0.00 1231850.28 84289.38 1561571.64 00:13:32.344 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0xa000 length 0xa000 00:13:32.344 nvme0n1 : 5.83 119.47 7.47 0.00 0.00 1003832.47 137928.07 1025991.29 00:13:32.344 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0xbd0b 00:13:32.344 nvme1n1 : 5.92 62.18 3.89 0.00 0.00 1913627.09 47185.92 3407065.40 00:13:32.344 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:32.344 nvme1n1 : 5.85 164.05 10.25 0.00 0.00 724598.94 11241.94 961463.53 00:13:32.344 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0x8000 00:13:32.344 nvme2n1 : 6.05 84.64 5.29 0.00 0.00 1334054.60 125829.12 1103424.59 00:13:32.344 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x8000 length 0x8000 00:13:32.344 nvme2n1 : 5.94 97.01 6.06 0.00 0.00 1190969.76 196809.65 1922927.06 00:13:32.344 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0x8000 00:13:32.344 nvme2n2 : 6.10 116.11 7.26 0.00 0.00 922985.56 51622.20 2039077.02 00:13:32.344 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x8000 length 0x8000 00:13:32.344 nvme2n2 : 5.92 137.76 8.61 0.00 0.00 827596.12 66140.95 980821.86 00:13:32.344 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0x8000 00:13:32.344 nvme2n3 : 6.14 114.64 7.17 0.00 0.00 898529.14 5772.21 2103604.78 00:13:32.344 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x8000 length 0x8000 00:13:32.344 nvme2n3 : 5.93 116.06 7.25 0.00 0.00 949958.97 66544.25 2077793.67 00:13:32.344 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x0 length 0x2000 00:13:32.344 nvme3n1 : 6.32 172.28 10.77 0.00 0.00 573224.23 1569.08 2413337.99 00:13:32.344 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.344 Verification LBA range: start 0x2000 length 0x2000 00:13:32.344 nvme3n1 : 5.94 161.55 10.10 0.00 0.00 665472.13 7914.73 787238.60 00:13:32.344 [2024-11-18T23:57:39.036Z] =================================================================================================================== 00:13:32.344 [2024-11-18T23:57:39.036Z] Total : 1445.51 90.34 0.00 0.00 932052.78 1569.08 3407065.40 00:13:33.731 00:13:33.731 real 0m8.211s 00:13:33.731 user 0m15.025s 00:13:33.731 sys 0m0.459s 00:13:33.731 ************************************ 00:13:33.731 END TEST bdev_verify_big_io 00:13:33.731 ************************************ 
00:13:33.731 23:57:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.731 23:57:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.731 23:57:40 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.731 23:57:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:33.731 23:57:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.731 23:57:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.732 ************************************ 00:13:33.732 START TEST bdev_write_zeroes 00:13:33.732 ************************************ 00:13:33.732 23:57:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.732 [2024-11-18 23:57:40.127200] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:33.732 [2024-11-18 23:57:40.127348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70380 ] 00:13:33.732 [2024-11-18 23:57:40.291035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.732 [2024-11-18 23:57:40.411347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.303 Running I/O for 1 seconds... 
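
[annotation] The write_zeroes workload below only exercises bdevs that advertise the capability; the JSON dump earlier in this log shows "write_zeroes": true for every xNVMe bdev. A sketch for checking that interactively, mirroring the jq filter the suite itself used above for unmap support (assumes a running SPDK target on the default RPC socket):

    # list bdevs whose supported_io_types include write_zeroes
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'
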
00:13:35.249 87584.00 IOPS, 342.12 MiB/s 00:13:35.249 Latency(us) 00:13:35.249 [2024-11-18T23:57:41.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.249 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme0n1 : 1.02 14342.23 56.02 0.00 0.00 8914.80 5822.62 25609.45 00:13:35.249 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme1n1 : 1.02 15360.78 60.00 0.00 0.00 8315.40 4335.46 19559.98 00:13:35.249 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme2n1 : 1.02 14381.60 56.18 0.00 0.00 8816.37 3150.77 22584.71 00:13:35.249 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme2n2 : 1.02 14296.28 55.84 0.00 0.00 8861.42 4915.20 23391.31 00:13:35.249 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme2n3 : 1.03 14233.10 55.60 0.00 0.00 8891.32 4839.58 26012.75 00:13:35.249 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.249 nvme3n1 : 1.03 14216.58 55.53 0.00 0.00 8895.61 4864.79 26617.70 00:13:35.249 [2024-11-18T23:57:41.941Z] =================================================================================================================== 00:13:35.249 [2024-11-18T23:57:41.941Z] Total : 86830.58 339.18 0.00 0.00 8776.97 3150.77 26617.70 00:13:36.193 00:13:36.193 real 0m2.613s 00:13:36.193 user 0m1.932s 00:13:36.193 sys 0m0.471s 00:13:36.193 23:57:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.193 ************************************ 00:13:36.193 END TEST bdev_write_zeroes 00:13:36.193 ************************************ 00:13:36.193 23:57:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:36.193 23:57:42 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.193 23:57:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.193 23:57:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.193 23:57:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.193 ************************************ 00:13:36.193 START TEST bdev_json_nonenclosed 00:13:36.193 ************************************ 00:13:36.193 23:57:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.193 [2024-11-18 23:57:42.815531] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:36.193 [2024-11-18 23:57:42.815680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70426 ] 00:13:36.454 [2024-11-18 23:57:42.980578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.454 [2024-11-18 23:57:43.100013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.454 [2024-11-18 23:57:43.100137] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.454 [2024-11-18 23:57:43.100158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.454 [2024-11-18 23:57:43.100169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.715 00:13:36.715 real 0m0.543s 00:13:36.715 user 0m0.331s 00:13:36.715 sys 0m0.106s 00:13:36.715 ************************************ 00:13:36.715 END TEST bdev_json_nonenclosed 00:13:36.715 ************************************ 00:13:36.715 23:57:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.715 23:57:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.715 23:57:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.715 23:57:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.715 23:57:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 ************************************ 00:13:36.715 START TEST bdev_json_nonarray 00:13:36.715 ************************************ 00:13:36.715 23:57:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.977 [2024-11-18 23:57:43.423568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:36.977 [2024-11-18 23:57:43.423708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70453 ] 00:13:36.977 [2024-11-18 23:57:43.591454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.239 [2024-11-18 23:57:43.708509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.239 [2024-11-18 23:57:43.708608] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:37.239 [2024-11-18 23:57:43.708628] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:37.239 [2024-11-18 23:57:43.708639] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.239 00:13:37.239 real 0m0.549s 00:13:37.239 user 0m0.332s 00:13:37.239 sys 0m0.111s 00:13:37.239 23:57:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.239 ************************************ 00:13:37.239 END TEST bdev_json_nonarray 00:13:37.239 ************************************ 00:13:37.239 23:57:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:37.500 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:37.500 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:37.500 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:37.500 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:37.500 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:37.501 23:57:43 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:43.055 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.999 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.999 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.262 00:13:44.262 real 0m59.960s 00:13:44.262 user 1m26.909s 00:13:44.262 sys 0m35.885s 00:13:44.262 ************************************ 00:13:44.262 END TEST blockdev_xnvme 00:13:44.262 ************************************ 00:13:44.262 23:57:50 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.262 23:57:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.262 23:57:50 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:44.262 23:57:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.262 23:57:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.262 23:57:50 -- common/autotest_common.sh@10 -- # set +x 00:13:44.262 ************************************ 00:13:44.262 START TEST ublk 00:13:44.262 ************************************ 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:44.262 * Looking for test storage... 
00:13:44.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.262 23:57:50 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.262 23:57:50 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.262 23:57:50 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.262 23:57:50 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.262 23:57:50 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.262 23:57:50 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:44.262 23:57:50 ublk -- scripts/common.sh@345 -- # : 1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.262 23:57:50 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:44.262 23:57:50 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@353 -- # local d=1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.262 23:57:50 ublk -- scripts/common.sh@355 -- # echo 1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.262 23:57:50 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@353 -- # local d=2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.262 23:57:50 ublk -- scripts/common.sh@355 -- # echo 2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.262 23:57:50 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.262 23:57:50 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.262 23:57:50 ublk -- scripts/common.sh@368 -- # return 0 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:44.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.262 --rc genhtml_branch_coverage=1 00:13:44.262 --rc genhtml_function_coverage=1 00:13:44.262 --rc genhtml_legend=1 00:13:44.262 --rc geninfo_all_blocks=1 00:13:44.262 --rc geninfo_unexecuted_blocks=1 00:13:44.262 00:13:44.262 ' 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:44.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.262 --rc genhtml_branch_coverage=1 00:13:44.262 --rc genhtml_function_coverage=1 00:13:44.262 --rc genhtml_legend=1 00:13:44.262 --rc geninfo_all_blocks=1 00:13:44.262 --rc geninfo_unexecuted_blocks=1 00:13:44.262 00:13:44.262 ' 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:44.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.262 --rc genhtml_branch_coverage=1 00:13:44.262 --rc 
genhtml_function_coverage=1 00:13:44.262 --rc genhtml_legend=1 00:13:44.262 --rc geninfo_all_blocks=1 00:13:44.262 --rc geninfo_unexecuted_blocks=1 00:13:44.262 00:13:44.262 ' 00:13:44.262 23:57:50 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:44.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.262 --rc genhtml_branch_coverage=1 00:13:44.262 --rc genhtml_function_coverage=1 00:13:44.262 --rc genhtml_legend=1 00:13:44.262 --rc geninfo_all_blocks=1 00:13:44.262 --rc geninfo_unexecuted_blocks=1 00:13:44.262 00:13:44.262 ' 00:13:44.262 23:57:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:44.262 23:57:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:44.262 23:57:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:44.262 23:57:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:44.262 23:57:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:44.263 23:57:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:44.263 23:57:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:44.263 23:57:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:44.263 23:57:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:44.263 23:57:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:44.263 23:57:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.263 23:57:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.263 23:57:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.524 ************************************ 00:13:44.524 START TEST test_save_ublk_config 00:13:44.524 ************************************ 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70756 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70756 00:13:44.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70756 ']' 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.524 23:57:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:44.524 [2024-11-18 23:57:51.044820] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:44.524 [2024-11-18 23:57:51.044966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70756 ] 00:13:44.524 [2024-11-18 23:57:51.209978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.786 [2024-11-18 23:57:51.329443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.360 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.360 [2024-11-18 23:57:52.044148] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:45.360 [2024-11-18 23:57:52.045000] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:45.622 malloc0 00:13:45.622 [2024-11-18 23:57:52.116281] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:45.622 [2024-11-18 23:57:52.116379] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:45.622 [2024-11-18 23:57:52.116390] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:45.622 [2024-11-18 23:57:52.116398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:45.622 [2024-11-18 23:57:52.124432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:45.622 [2024-11-18 23:57:52.124461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:45.622 [2024-11-18 23:57:52.132169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:45.622 [2024-11-18 23:57:52.132291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:45.622 [2024-11-18 23:57:52.149155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:45.622 0 00:13:45.622 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.622 23:57:52 
ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:45.622 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.622 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.884 23:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:45.884 "subsystems": [ 00:13:45.884 { 00:13:45.884 "subsystem": "fsdev", 00:13:45.884 "config": [ 00:13:45.884 { 00:13:45.884 "method": "fsdev_set_opts", 00:13:45.884 "params": { 00:13:45.884 "fsdev_io_pool_size": 65535, 00:13:45.884 "fsdev_io_cache_size": 256 00:13:45.884 } 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "keyring", 00:13:45.884 "config": [] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "iobuf", 00:13:45.884 "config": [ 00:13:45.884 { 00:13:45.884 "method": "iobuf_set_options", 00:13:45.884 "params": { 00:13:45.884 "small_pool_count": 8192, 00:13:45.884 "large_pool_count": 1024, 00:13:45.884 "small_bufsize": 8192, 00:13:45.884 "large_bufsize": 135168, 00:13:45.884 "enable_numa": false 00:13:45.884 } 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "sock", 00:13:45.884 "config": [ 00:13:45.884 { 00:13:45.884 "method": "sock_set_default_impl", 00:13:45.884 "params": { 00:13:45.884 "impl_name": "posix" 00:13:45.884 } 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "method": "sock_impl_set_options", 00:13:45.884 "params": { 00:13:45.884 "impl_name": "ssl", 00:13:45.884 "recv_buf_size": 4096, 00:13:45.884 "send_buf_size": 4096, 00:13:45.884 "enable_recv_pipe": true, 00:13:45.884 "enable_quickack": false, 00:13:45.884 "enable_placement_id": 0, 00:13:45.884 "enable_zerocopy_send_server": true, 00:13:45.884 "enable_zerocopy_send_client": false, 00:13:45.884 "zerocopy_threshold": 0, 00:13:45.884 "tls_version": 0, 00:13:45.884 "enable_ktls": false 00:13:45.884 } 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "method": "sock_impl_set_options", 00:13:45.884 "params": { 00:13:45.884 "impl_name": "posix", 00:13:45.884 "recv_buf_size": 2097152, 00:13:45.884 "send_buf_size": 2097152, 00:13:45.884 "enable_recv_pipe": true, 00:13:45.884 "enable_quickack": false, 00:13:45.884 "enable_placement_id": 0, 00:13:45.884 "enable_zerocopy_send_server": true, 00:13:45.884 "enable_zerocopy_send_client": false, 00:13:45.884 "zerocopy_threshold": 0, 00:13:45.884 "tls_version": 0, 00:13:45.884 "enable_ktls": false 00:13:45.884 } 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "vmd", 00:13:45.884 "config": [] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "accel", 00:13:45.884 "config": [ 00:13:45.884 { 00:13:45.884 "method": "accel_set_options", 00:13:45.884 "params": { 00:13:45.884 "small_cache_size": 128, 00:13:45.884 "large_cache_size": 16, 00:13:45.884 "task_count": 2048, 00:13:45.884 "sequence_count": 2048, 00:13:45.884 "buf_count": 2048 00:13:45.884 } 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "subsystem": "bdev", 00:13:45.884 "config": [ 00:13:45.884 { 00:13:45.884 "method": "bdev_set_options", 00:13:45.884 "params": { 00:13:45.884 "bdev_io_pool_size": 65535, 00:13:45.884 "bdev_io_cache_size": 256, 00:13:45.884 "bdev_auto_examine": true, 00:13:45.884 "iobuf_small_cache_size": 128, 00:13:45.884 "iobuf_large_cache_size": 16 00:13:45.884 } 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "method": 
"bdev_raid_set_options", 00:13:45.884 "params": { 00:13:45.884 "process_window_size_kb": 1024, 00:13:45.884 "process_max_bandwidth_mb_sec": 0 00:13:45.884 } 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "method": "bdev_iscsi_set_options", 00:13:45.884 "params": { 00:13:45.884 "timeout_sec": 30 00:13:45.884 } 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "method": "bdev_nvme_set_options", 00:13:45.884 "params": { 00:13:45.884 "action_on_timeout": "none", 00:13:45.884 "timeout_us": 0, 00:13:45.884 "timeout_admin_us": 0, 00:13:45.884 "keep_alive_timeout_ms": 10000, 00:13:45.884 "arbitration_burst": 0, 00:13:45.884 "low_priority_weight": 0, 00:13:45.884 "medium_priority_weight": 0, 00:13:45.884 "high_priority_weight": 0, 00:13:45.884 "nvme_adminq_poll_period_us": 10000, 00:13:45.884 "nvme_ioq_poll_period_us": 0, 00:13:45.884 "io_queue_requests": 0, 00:13:45.884 "delay_cmd_submit": true, 00:13:45.884 "transport_retry_count": 4, 00:13:45.884 "bdev_retry_count": 3, 00:13:45.884 "transport_ack_timeout": 0, 00:13:45.884 "ctrlr_loss_timeout_sec": 0, 00:13:45.884 "reconnect_delay_sec": 0, 00:13:45.884 "fast_io_fail_timeout_sec": 0, 00:13:45.884 "disable_auto_failback": false, 00:13:45.884 "generate_uuids": false, 00:13:45.884 "transport_tos": 0, 00:13:45.884 "nvme_error_stat": false, 00:13:45.884 "rdma_srq_size": 0, 00:13:45.884 "io_path_stat": false, 00:13:45.884 "allow_accel_sequence": false, 00:13:45.884 "rdma_max_cq_size": 0, 00:13:45.884 "rdma_cm_event_timeout_ms": 0, 00:13:45.884 "dhchap_digests": [ 00:13:45.885 "sha256", 00:13:45.885 "sha384", 00:13:45.885 "sha512" 00:13:45.885 ], 00:13:45.885 "dhchap_dhgroups": [ 00:13:45.885 "null", 00:13:45.885 "ffdhe2048", 00:13:45.885 "ffdhe3072", 00:13:45.885 "ffdhe4096", 00:13:45.885 "ffdhe6144", 00:13:45.885 "ffdhe8192" 00:13:45.885 ] 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "bdev_nvme_set_hotplug", 00:13:45.885 "params": { 00:13:45.885 "period_us": 100000, 00:13:45.885 "enable": false 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "bdev_malloc_create", 00:13:45.885 "params": { 00:13:45.885 "name": "malloc0", 00:13:45.885 "num_blocks": 8192, 00:13:45.885 "block_size": 4096, 00:13:45.885 "physical_block_size": 4096, 00:13:45.885 "uuid": "9d73c5be-b41a-4157-89c1-3feebf1300b7", 00:13:45.885 "optimal_io_boundary": 0, 00:13:45.885 "md_size": 0, 00:13:45.885 "dif_type": 0, 00:13:45.885 "dif_is_head_of_md": false, 00:13:45.885 "dif_pi_format": 0 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "bdev_wait_for_examine" 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "scsi", 00:13:45.885 "config": null 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "scheduler", 00:13:45.885 "config": [ 00:13:45.885 { 00:13:45.885 "method": "framework_set_scheduler", 00:13:45.885 "params": { 00:13:45.885 "name": "static" 00:13:45.885 } 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "vhost_scsi", 00:13:45.885 "config": [] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "vhost_blk", 00:13:45.885 "config": [] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "ublk", 00:13:45.885 "config": [ 00:13:45.885 { 00:13:45.885 "method": "ublk_create_target", 00:13:45.885 "params": { 00:13:45.885 "cpumask": "1" 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "ublk_start_disk", 00:13:45.885 "params": { 00:13:45.885 "bdev_name": "malloc0", 00:13:45.885 "ublk_id": 0, 00:13:45.885 "num_queues": 1, 
00:13:45.885 "queue_depth": 128 00:13:45.885 } 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "nbd", 00:13:45.885 "config": [] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "nvmf", 00:13:45.885 "config": [ 00:13:45.885 { 00:13:45.885 "method": "nvmf_set_config", 00:13:45.885 "params": { 00:13:45.885 "discovery_filter": "match_any", 00:13:45.885 "admin_cmd_passthru": { 00:13:45.885 "identify_ctrlr": false 00:13:45.885 }, 00:13:45.885 "dhchap_digests": [ 00:13:45.885 "sha256", 00:13:45.885 "sha384", 00:13:45.885 "sha512" 00:13:45.885 ], 00:13:45.885 "dhchap_dhgroups": [ 00:13:45.885 "null", 00:13:45.885 "ffdhe2048", 00:13:45.885 "ffdhe3072", 00:13:45.885 "ffdhe4096", 00:13:45.885 "ffdhe6144", 00:13:45.885 "ffdhe8192" 00:13:45.885 ] 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "nvmf_set_max_subsystems", 00:13:45.885 "params": { 00:13:45.885 "max_subsystems": 1024 00:13:45.885 } 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "method": "nvmf_set_crdt", 00:13:45.885 "params": { 00:13:45.885 "crdt1": 0, 00:13:45.885 "crdt2": 0, 00:13:45.885 "crdt3": 0 00:13:45.885 } 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 }, 00:13:45.885 { 00:13:45.885 "subsystem": "iscsi", 00:13:45.885 "config": [ 00:13:45.885 { 00:13:45.885 "method": "iscsi_set_options", 00:13:45.885 "params": { 00:13:45.885 "node_base": "iqn.2016-06.io.spdk", 00:13:45.885 "max_sessions": 128, 00:13:45.885 "max_connections_per_session": 2, 00:13:45.885 "max_queue_depth": 64, 00:13:45.885 "default_time2wait": 2, 00:13:45.885 "default_time2retain": 20, 00:13:45.885 "first_burst_length": 8192, 00:13:45.885 "immediate_data": true, 00:13:45.885 "allow_duplicated_isid": false, 00:13:45.885 "error_recovery_level": 0, 00:13:45.885 "nop_timeout": 60, 00:13:45.885 "nop_in_interval": 30, 00:13:45.885 "disable_chap": false, 00:13:45.885 "require_chap": false, 00:13:45.885 "mutual_chap": false, 00:13:45.885 "chap_group": 0, 00:13:45.885 "max_large_datain_per_connection": 64, 00:13:45.885 "max_r2t_per_connection": 4, 00:13:45.885 "pdu_pool_size": 36864, 00:13:45.885 "immediate_data_pool_size": 16384, 00:13:45.885 "data_out_pool_size": 2048 00:13:45.885 } 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 } 00:13:45.885 ] 00:13:45.885 }' 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70756 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70756 ']' 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70756 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70756 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.885 killing process with pid 70756 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70756' 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70756 00:13:45.885 23:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70756 00:13:47.270 [2024-11-18 23:57:53.560786] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:47.270 [2024-11-18 23:57:53.600197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:47.270 [2024-11-18 23:57:53.600342] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:47.270 [2024-11-18 23:57:53.608172] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:47.270 [2024-11-18 23:57:53.608239] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:47.270 [2024-11-18 23:57:53.608253] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:47.270 [2024-11-18 23:57:53.608280] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:47.270 [2024-11-18 23:57:53.608438] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:48.708 23:57:55 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70815 00:13:48.708 23:57:55 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70815 00:13:48.708 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70815 ']' 00:13:48.708 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.708 23:57:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:48.708 "subsystems": [ 00:13:48.708 { 00:13:48.708 "subsystem": "fsdev", 00:13:48.708 "config": [ 00:13:48.708 { 00:13:48.708 "method": "fsdev_set_opts", 00:13:48.708 "params": { 00:13:48.708 "fsdev_io_pool_size": 65535, 00:13:48.708 "fsdev_io_cache_size": 256 00:13:48.708 } 00:13:48.708 } 00:13:48.708 ] 00:13:48.708 }, 00:13:48.708 { 00:13:48.708 "subsystem": "keyring", 00:13:48.708 "config": [] 00:13:48.708 }, 00:13:48.708 { 00:13:48.708 "subsystem": "iobuf", 00:13:48.708 "config": [ 00:13:48.708 { 00:13:48.708 "method": "iobuf_set_options", 00:13:48.708 "params": { 00:13:48.708 "small_pool_count": 8192, 00:13:48.708 "large_pool_count": 1024, 00:13:48.708 "small_bufsize": 8192, 00:13:48.708 "large_bufsize": 135168, 00:13:48.708 "enable_numa": false 00:13:48.708 } 00:13:48.708 } 00:13:48.708 ] 00:13:48.708 }, 00:13:48.708 { 00:13:48.708 "subsystem": "sock", 00:13:48.708 "config": [ 00:13:48.708 { 00:13:48.708 "method": "sock_set_default_impl", 00:13:48.708 "params": { 00:13:48.708 "impl_name": "posix" 00:13:48.708 } 00:13:48.708 }, 00:13:48.708 { 00:13:48.708 "method": "sock_impl_set_options", 00:13:48.708 "params": { 00:13:48.708 "impl_name": "ssl", 00:13:48.708 "recv_buf_size": 4096, 00:13:48.708 "send_buf_size": 4096, 00:13:48.708 "enable_recv_pipe": true, 00:13:48.708 "enable_quickack": false, 00:13:48.708 "enable_placement_id": 0, 00:13:48.708 "enable_zerocopy_send_server": true, 00:13:48.708 "enable_zerocopy_send_client": false, 00:13:48.708 "zerocopy_threshold": 0, 00:13:48.708 "tls_version": 0, 00:13:48.709 "enable_ktls": false 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "sock_impl_set_options", 00:13:48.709 "params": { 00:13:48.709 "impl_name": "posix", 00:13:48.709 "recv_buf_size": 2097152, 00:13:48.709 "send_buf_size": 2097152, 00:13:48.709 "enable_recv_pipe": true, 00:13:48.709 "enable_quickack": false, 00:13:48.709 "enable_placement_id": 0, 00:13:48.709 "enable_zerocopy_send_server": true, 00:13:48.709 "enable_zerocopy_send_client": false, 00:13:48.709 "zerocopy_threshold": 0, 00:13:48.709 "tls_version": 0, 00:13:48.709 "enable_ktls": false 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "vmd", 00:13:48.709 "config": [] 
00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "accel", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "accel_set_options", 00:13:48.709 "params": { 00:13:48.709 "small_cache_size": 128, 00:13:48.709 "large_cache_size": 16, 00:13:48.709 "task_count": 2048, 00:13:48.709 "sequence_count": 2048, 00:13:48.709 "buf_count": 2048 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "bdev", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "bdev_set_options", 00:13:48.709 "params": { 00:13:48.709 "bdev_io_pool_size": 65535, 00:13:48.709 "bdev_io_cache_size": 256, 00:13:48.709 "bdev_auto_examine": true, 00:13:48.709 "iobuf_small_cache_size": 128, 00:13:48.709 "iobuf_large_cache_size": 16 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_raid_set_options", 00:13:48.709 "params": { 00:13:48.709 "process_window_size_kb": 1024, 00:13:48.709 "process_max_bandwidth_mb_sec": 0 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_iscsi_set_options", 00:13:48.709 "params": { 00:13:48.709 "timeout_sec": 30 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_nvme_set_options", 00:13:48.709 "params": { 00:13:48.709 "action_on_timeout": "none", 00:13:48.709 "timeout_us": 0, 00:13:48.709 "timeout_admin_us": 0, 00:13:48.709 "keep_alive_timeout_ms": 10000, 00:13:48.709 "arbitration_burst": 0, 00:13:48.709 "low_priority_weight": 0, 00:13:48.709 "medium_priority_weight": 0, 00:13:48.709 "high_priority_weight": 0, 00:13:48.709 "nvme_adminq_poll_period_us": 10000, 00:13:48.709 "nvme_ioq_poll_period_us": 0, 00:13:48.709 "io_queue_requests": 0, 00:13:48.709 "delay_cmd_submit": true, 00:13:48.709 "transport_retry_count": 4, 00:13:48.709 "bdev_retry_count": 3, 00:13:48.709 "transport_ack_timeout": 0, 00:13:48.709 "ctrlr_loss_timeout_sec": 0, 00:13:48.709 "reconnect_delay_sec": 0, 00:13:48.709 "fast_io_fail_timeout_sec": 0, 00:13:48.709 "disable_auto_failback": false, 00:13:48.709 "generate_uuids": false, 00:13:48.709 "transport_tos": 0, 00:13:48.709 "nvme_error_stat": false, 00:13:48.709 "rdma_srq_size": 0, 00:13:48.709 "io_path_stat": false, 00:13:48.709 "allow_accel_sequence": false, 00:13:48.709 "rdma_max_cq_size": 0, 00:13:48.709 "rdma_cm_event_timeout_ms": 0, 00:13:48.709 "dhchap_digests": [ 00:13:48.709 "sha256", 00:13:48.709 "sha384", 00:13:48.709 "sha512" 00:13:48.709 ], 00:13:48.709 "dhchap_dhgroups": [ 00:13:48.709 "null", 00:13:48.709 "ffdhe2048", 00:13:48.709 "ffdhe3072", 00:13:48.709 "ffdhe4096", 00:13:48.709 "ffdhe6144", 00:13:48.709 "ffdhe8192" 00:13:48.709 ] 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_nvme_set_hotplug", 00:13:48.709 "params": { 00:13:48.709 "period_us": 100000, 00:13:48.709 "enable": false 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_malloc_create", 00:13:48.709 "params": { 00:13:48.709 "name": "malloc0", 00:13:48.709 "num_blocks": 8192, 00:13:48.709 "block_size": 4096, 00:13:48.709 "physical_block_size": 4096, 00:13:48.709 "uuid": "9d73c5be-b41a-4157-89c1-3feebf1300b7", 00:13:48.709 "optimal_io_boundary": 0, 00:13:48.709 "md_size": 0, 00:13:48.709 "dif_type": 0, 00:13:48.709 "dif_is_head_of_md": false, 00:13:48.709 "dif_pi_format": 0 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "bdev_wait_for_examine" 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "scsi", 00:13:48.709 "config": null 00:13:48.709 }, 00:13:48.709 
{ 00:13:48.709 "subsystem": "scheduler", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "framework_set_scheduler", 00:13:48.709 "params": { 00:13:48.709 "name": "static" 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "vhost_scsi", 00:13:48.709 "config": [] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "vhost_blk", 00:13:48.709 "config": [] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "ublk", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "ublk_create_target", 00:13:48.709 "params": { 00:13:48.709 "cpumask": "1" 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "ublk_start_disk", 00:13:48.709 "params": { 00:13:48.709 "bdev_name": "malloc0", 00:13:48.709 "ublk_id": 0, 00:13:48.709 "num_queues": 1, 00:13:48.709 "queue_depth": 128 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "nbd", 00:13:48.709 "config": [] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "nvmf", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "nvmf_set_config", 00:13:48.709 "params": { 00:13:48.709 "discovery_filter": "match_any", 00:13:48.709 "admin_cmd_passthru": { 00:13:48.709 "identify_ctrlr": false 00:13:48.709 }, 00:13:48.709 "dhchap_digests": [ 00:13:48.709 "sha256", 00:13:48.709 "sha384", 00:13:48.709 "sha512" 00:13:48.709 ], 00:13:48.709 "dhchap_dhgroups": [ 00:13:48.709 "null", 00:13:48.709 "ffdhe2048", 00:13:48.709 "ffdhe3072", 00:13:48.709 "ffdhe4096", 00:13:48.709 "ffdhe6144", 00:13:48.709 "ffdhe8192" 00:13:48.709 ] 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "nvmf_set_max_subsystems", 00:13:48.709 "params": { 00:13:48.709 "max_subsystems": 1024 00:13:48.709 } 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "method": "nvmf_set_crdt", 00:13:48.709 "params": { 00:13:48.709 "crdt1": 0, 00:13:48.709 "crdt2": 0, 00:13:48.709 "crdt3": 0 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "subsystem": "iscsi", 00:13:48.709 "config": [ 00:13:48.709 { 00:13:48.709 "method": "iscsi_set_options", 00:13:48.709 "params": { 00:13:48.709 "node_base": "iqn.2016-06.io.spdk", 00:13:48.709 "max_sessions": 128, 00:13:48.709 "max_connections_per_session": 2, 00:13:48.709 "max_queue_depth": 64, 00:13:48.709 "default_time2wait": 2, 00:13:48.709 "default_time2retain": 20, 00:13:48.709 "first_burst_length": 8192, 00:13:48.709 "immediate_data": true, 00:13:48.709 "allow_duplicated_isid": false, 00:13:48.709 "error_recovery_level": 0, 00:13:48.709 "nop_timeout": 60, 00:13:48.709 "nop_in_interval": 30, 00:13:48.709 "disable_chap": false, 00:13:48.709 "require_chap": false, 00:13:48.709 "mutual_chap": false, 00:13:48.709 "chap_group": 0, 00:13:48.709 "max_large_datain_per_connection": 64, 00:13:48.709 "max_r2t_per_connection": 4, 00:13:48.709 "pdu_pool_size": 36864, 00:13:48.709 "immediate_data_pool_size": 16384, 00:13:48.709 "data_out_pool_size": 2048 00:13:48.709 } 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }' 00:13:48.709 23:57:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:48.709 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:48.709 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.709 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.709 23:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 [2024-11-18 23:57:55.169700] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:48.709 [2024-11-18 23:57:55.169841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70815 ] 00:13:48.709 [2024-11-18 23:57:55.332837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.970 [2024-11-18 23:57:55.418917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.542 [2024-11-18 23:57:56.053137] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:49.542 [2024-11-18 23:57:56.053770] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.542 [2024-11-18 23:57:56.061226] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:49.542 [2024-11-18 23:57:56.061284] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:49.542 [2024-11-18 23:57:56.061291] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:49.542 [2024-11-18 23:57:56.061297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.542 [2024-11-18 23:57:56.069228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.542 [2024-11-18 23:57:56.069245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.542 [2024-11-18 23:57:56.077143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.542 [2024-11-18 23:57:56.077212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:49.542 [2024-11-18 23:57:56.094140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.542 23:57:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70815 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70815 ']' 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 
70815 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70815 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.543 killing process with pid 70815 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70815' 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70815 00:13:49.543 23:57:56 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70815 00:13:50.926 [2024-11-18 23:57:57.182646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.926 [2024-11-18 23:57:57.229206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.926 [2024-11-18 23:57:57.229297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.926 [2024-11-18 23:57:57.238147] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.926 [2024-11-18 23:57:57.238187] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:50.926 [2024-11-18 23:57:57.238193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:50.926 [2024-11-18 23:57:57.238212] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.926 [2024-11-18 23:57:57.238319] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:51.869 23:57:58 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:51.869 00:13:51.869 real 0m7.466s 00:13:51.869 user 0m5.001s 00:13:51.869 sys 0m3.117s 00:13:51.869 23:57:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.869 ************************************ 00:13:51.869 23:57:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.869 END TEST test_save_ublk_config 00:13:51.869 ************************************ 00:13:51.869 23:57:58 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70883 00:13:51.869 23:57:58 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.869 23:57:58 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70883 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@835 -- # '[' -z 70883 ']' 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.869 23:57:58 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.869 23:57:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.869 [2024-11-18 23:57:58.547789] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
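Before the per-test trace below, the create path reduces to a short sequence of plain rpc.py calls; this is what test_create_ublk drives, with the queue count and depth it uses (each command appears verbatim in the xtrace that follows):

scripts/rpc.py ublk_create_target                      # create the ublk target
scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB malloc bdev, 4 KiB blocks
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
scripts/rpc.py ublk_get_disks -n 0                     # verify device, id, queues, depth
scripts/rpc.py ublk_stop_disk 0                        # teardown
scripts/rpc.py ublk_destroy_target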
00:13:51.869 [2024-11-18 23:57:58.547929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70883 ] 00:13:52.130 [2024-11-18 23:57:58.714167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.391 [2024-11-18 23:57:58.839697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.391 [2024-11-18 23:57:58.839785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.963 23:57:59 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.963 23:57:59 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:52.963 23:57:59 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:52.963 23:57:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:52.963 23:57:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.963 23:57:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:52.963 ************************************ 00:13:52.963 START TEST test_create_ublk 00:13:52.963 ************************************ 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:52.964 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 [2024-11-18 23:57:59.565150] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:52.964 [2024-11-18 23:57:59.567466] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.964 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:52.964 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.964 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 [2024-11-18 23:57:59.795325] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:53.225 [2024-11-18 23:57:59.795756] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:53.225 [2024-11-18 23:57:59.795771] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:53.225 [2024-11-18 23:57:59.795779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.225 [2024-11-18 23:57:59.803194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.225 [2024-11-18 23:57:59.803225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.225 
[2024-11-18 23:57:59.811161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.225 [2024-11-18 23:57:59.825215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.225 [2024-11-18 23:57:59.838254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 23:57:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:53.225 { 00:13:53.225 "ublk_device": "/dev/ublkb0", 00:13:53.225 "id": 0, 00:13:53.225 "queue_depth": 512, 00:13:53.225 "num_queues": 4, 00:13:53.225 "bdev_name": "Malloc0" 00:13:53.225 } 00:13:53.225 ]' 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:53.225 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:53.486 23:57:59 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:53.486 23:58:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:53.486 23:58:00 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
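run_fio_test expands the template above into the single fio invocation on the next line: a 10-second, direct-I/O sequential write of the 0xcc pattern over the first 128 MiB of /dev/ublkb0. Reproduced standalone for readability (flags taken verbatim from the trace):

fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

As fio itself notes below, the verify read phase never runs because --time_based lets the write phase consume the whole runtime.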
00:13:53.486 23:58:00 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:53.486 fio: verification read phase will never start because write phase uses all of runtime 00:13:53.486 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:53.486 fio-3.35 00:13:53.486 Starting 1 process 00:14:05.715 00:14:05.715 fio_test: (groupid=0, jobs=1): err= 0: pid=70928: Mon Nov 18 23:58:10 2024 00:14:05.715 write: IOPS=14.4k, BW=56.2MiB/s (58.9MB/s)(562MiB/10001msec); 0 zone resets 00:14:05.715 clat (usec): min=36, max=9176, avg=68.74, stdev=136.42 00:14:05.715 lat (usec): min=36, max=9195, avg=69.17, stdev=136.44 00:14:05.715 clat percentiles (usec): 00:14:05.715 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 52], 00:14:05.715 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66], 00:14:05.715 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 78], 95.00th=[ 82], 00:14:05.716 | 99.00th=[ 93], 99.50th=[ 111], 99.90th=[ 2966], 99.95th=[ 3720], 00:14:05.716 | 99.99th=[ 4015] 00:14:05.716 bw ( KiB/s): min=28392, max=75256, per=99.86%, avg=57472.00, stdev=10982.86, samples=19 00:14:05.716 iops : min= 7098, max=18814, avg=14368.00, stdev=2745.71, samples=19 00:14:05.716 lat (usec) : 50=13.88%, 100=85.50%, 250=0.33%, 500=0.04%, 750=0.01% 00:14:05.716 lat (usec) : 1000=0.01% 00:14:05.716 lat (msec) : 2=0.06%, 4=0.15%, 10=0.01% 00:14:05.716 cpu : usr=1.96%, sys=11.81%, ctx=143921, majf=0, minf=798 00:14:05.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.716 issued rwts: total=0,143893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.716 00:14:05.716 Run status group 0 (all jobs): 00:14:05.716 WRITE: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=562MiB (589MB), run=10001-10001msec 00:14:05.716 00:14:05.716 Disk stats (read/write): 00:14:05.716 ublkb0: ios=0/142371, merge=0/0, ticks=0/8445, in_queue=8446, util=98.82% 00:14:05.716 23:58:10 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:10.267097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.716 [2024-11-18 23:58:10.304177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.716 [2024-11-18 23:58:10.304854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.716 [2024-11-18 23:58:10.310154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.716 [2024-11-18 23:58:10.310431] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:05.716 [2024-11-18 23:58:10.310447] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:10.326211] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:05.716 request: 00:14:05.716 { 00:14:05.716 "ublk_id": 0, 00:14:05.716 "method": "ublk_stop_disk", 00:14:05.716 "req_id": 1 00:14:05.716 } 00:14:05.716 Got JSON-RPC error response 00:14:05.716 response: 00:14:05.716 { 00:14:05.716 "code": -19, 00:14:05.716 "message": "No such device" 00:14:05.716 } 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.716 23:58:10 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:10.342214] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.716 [2024-11-18 23:58:10.350137] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:05.716 [2024-11-18 23:58:10.350171] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:05.716 23:58:10 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:05.716 ************************************ 00:14:05.716 END TEST test_create_ublk 00:14:05.716 ************************************ 00:14:05.716 23:58:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:05.716 00:14:05.716 real 0m11.281s 00:14:05.716 user 0m0.516s 00:14:05.716 sys 0m1.254s 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:10 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:05.716 23:58:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.716 23:58:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.716 23:58:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 ************************************ 00:14:05.716 START TEST test_create_multi_ublk 00:14:05.716 ************************************ 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:10.882140] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:05.716 [2024-11-18 23:58:10.883912] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:11.110263] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:05.716 [2024-11-18 23:58:11.110591] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:05.716 [2024-11-18 23:58:11.110603] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.716 [2024-11-18 23:58:11.110612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.716 [2024-11-18 23:58:11.122411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.716 [2024-11-18 23:58:11.122430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.716 [2024-11-18 23:58:11.134156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.716 [2024-11-18 23:58:11.134691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.716 [2024-11-18 23:58:11.170151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.716 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.716 [2024-11-18 23:58:11.394244] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:05.717 [2024-11-18 23:58:11.394565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:05.717 [2024-11-18 23:58:11.394578] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:05.717 [2024-11-18 23:58:11.394585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.717 [2024-11-18 23:58:11.402164] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.717 [2024-11-18 23:58:11.402182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.717 [2024-11-18 23:58:11.410154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.717 [2024-11-18 23:58:11.410680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:05.717 [2024-11-18 23:58:11.419173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 
23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 [2024-11-18 23:58:11.594239] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:05.717 [2024-11-18 23:58:11.594566] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:05.717 [2024-11-18 23:58:11.594579] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:05.717 [2024-11-18 23:58:11.594586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.717 [2024-11-18 23:58:11.602151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.717 [2024-11-18 23:58:11.602171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.717 [2024-11-18 23:58:11.610151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.717 [2024-11-18 23:58:11.610688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:05.717 [2024-11-18 23:58:11.631148] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 [2024-11-18 23:58:11.806258] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:05.717 [2024-11-18 23:58:11.806580] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:05.717 [2024-11-18 23:58:11.806593] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:05.717 [2024-11-18 23:58:11.806599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.717 
[2024-11-18 23:58:11.814157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.717 [2024-11-18 23:58:11.814174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.717 [2024-11-18 23:58:11.822154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.717 [2024-11-18 23:58:11.822689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:05.717 [2024-11-18 23:58:11.825876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:05.717 { 00:14:05.717 "ublk_device": "/dev/ublkb0", 00:14:05.717 "id": 0, 00:14:05.717 "queue_depth": 512, 00:14:05.717 "num_queues": 4, 00:14:05.717 "bdev_name": "Malloc0" 00:14:05.717 }, 00:14:05.717 { 00:14:05.717 "ublk_device": "/dev/ublkb1", 00:14:05.717 "id": 1, 00:14:05.717 "queue_depth": 512, 00:14:05.717 "num_queues": 4, 00:14:05.717 "bdev_name": "Malloc1" 00:14:05.717 }, 00:14:05.717 { 00:14:05.717 "ublk_device": "/dev/ublkb2", 00:14:05.717 "id": 2, 00:14:05.717 "queue_depth": 512, 00:14:05.717 "num_queues": 4, 00:14:05.717 "bdev_name": "Malloc2" 00:14:05.717 }, 00:14:05.717 { 00:14:05.717 "ublk_device": "/dev/ublkb3", 00:14:05.717 "id": 3, 00:14:05.717 "queue_depth": 512, 00:14:05.717 "num_queues": 4, 00:14:05.717 "bdev_name": "Malloc3" 00:14:05.717 } 00:14:05.717 ]' 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.717 23:58:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:05.717 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.977 [2024-11-18 23:58:12.486211] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.977 [2024-11-18 23:58:12.519711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.977 [2024-11-18 23:58:12.520849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.977 [2024-11-18 23:58:12.530226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.977 [2024-11-18 23:58:12.530475] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:05.977 [2024-11-18 23:58:12.530502] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.977 [2024-11-18 23:58:12.546192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.977 [2024-11-18 23:58:12.583725] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.977 [2024-11-18 23:58:12.584779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.977 [2024-11-18 23:58:12.594153] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.977 [2024-11-18 23:58:12.594397] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:05.977 [2024-11-18 23:58:12.594411] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.977 [2024-11-18 23:58:12.610222] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.977 [2024-11-18 23:58:12.646188] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.977 [2024-11-18 23:58:12.646896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.977 [2024-11-18 23:58:12.654151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.977 [2024-11-18 23:58:12.654379] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:05.977 [2024-11-18 23:58:12.654391] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.977 23:58:12 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.235 [2024-11-18 23:58:12.670208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.235 [2024-11-18 23:58:12.699684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.235 [2024-11-18 23:58:12.700630] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.235 [2024-11-18 23:58:12.710153] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.235 [2024-11-18 23:58:12.710387] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:06.235 [2024-11-18 23:58:12.710399] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:06.235 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.235 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:06.235 [2024-11-18 23:58:12.905200] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.235 [2024-11-18 23:58:12.913141] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:06.235 [2024-11-18 23:58:12.913170] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:06.494 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:06.494 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.494 23:58:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:06.494 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.494 23:58:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.752 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.752 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.752 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:06.752 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.752 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.011 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.011 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.011 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:07.011 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.011 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.269 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.269 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.269 23:58:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:07.269 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.269 23:58:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:07.527 23:58:14 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:07.527 ************************************ 00:14:07.527 END TEST test_create_multi_ublk 00:14:07.527 ************************************ 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:07.527 00:14:07.527 real 0m3.274s 00:14:07.527 user 0m0.803s 00:14:07.527 sys 0m0.152s 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.527 23:58:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.527 23:58:14 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:07.527 23:58:14 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:07.527 23:58:14 ublk -- ublk/ublk.sh@130 -- # killprocess 70883 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@954 -- # '[' -z 70883 ']' 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@958 -- # kill -0 70883 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@959 -- # uname 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70883 00:14:07.527 killing process with pid 70883 00:14:07.527 23:58:14 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.528 23:58:14 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.528 23:58:14 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70883' 00:14:07.528 23:58:14 ublk -- common/autotest_common.sh@973 -- # kill 70883 00:14:07.528 23:58:14 ublk -- common/autotest_common.sh@978 -- # wait 70883 00:14:08.094 [2024-11-18 23:58:14.773770] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.094 [2024-11-18 23:58:14.773819] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.028 00:14:09.028 real 0m24.681s 00:14:09.028 user 0m35.101s 00:14:09.028 sys 0m9.594s 00:14:09.028 ************************************ 00:14:09.028 END TEST ublk 00:14:09.028 ************************************ 00:14:09.028 23:58:15 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.028 23:58:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.028 23:58:15 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:09.028 
23:58:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.028 23:58:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.028 23:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:09.028 ************************************ 00:14:09.028 START TEST ublk_recovery 00:14:09.028 ************************************ 00:14:09.028 23:58:15 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:09.028 * Looking for test storage... 00:14:09.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:09.028 23:58:15 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.028 23:58:15 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.028 23:58:15 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.028 23:58:15 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:09.028 23:58:15 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.029 23:58:15 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.029 --rc genhtml_branch_coverage=1 00:14:09.029 --rc genhtml_function_coverage=1 00:14:09.029 --rc genhtml_legend=1 00:14:09.029 --rc geninfo_all_blocks=1 00:14:09.029 --rc geninfo_unexecuted_blocks=1 00:14:09.029 00:14:09.029 ' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.029 --rc genhtml_branch_coverage=1 00:14:09.029 --rc genhtml_function_coverage=1 00:14:09.029 --rc genhtml_legend=1 00:14:09.029 --rc geninfo_all_blocks=1 00:14:09.029 --rc geninfo_unexecuted_blocks=1 00:14:09.029 00:14:09.029 ' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.029 --rc genhtml_branch_coverage=1 00:14:09.029 --rc genhtml_function_coverage=1 00:14:09.029 --rc genhtml_legend=1 00:14:09.029 --rc geninfo_all_blocks=1 00:14:09.029 --rc geninfo_unexecuted_blocks=1 00:14:09.029 00:14:09.029 ' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.029 --rc genhtml_branch_coverage=1 00:14:09.029 --rc genhtml_function_coverage=1 00:14:09.029 --rc genhtml_legend=1 00:14:09.029 --rc geninfo_all_blocks=1 00:14:09.029 --rc geninfo_unexecuted_blocks=1 00:14:09.029 00:14:09.029 ' 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:09.029 23:58:15 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:09.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71268 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71268 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71268 ']' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.029 23:58:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.029 23:58:15 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.287 [2024-11-18 23:58:15.734784] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:09.287 [2024-11-18 23:58:15.734901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71268 ] 00:14:09.287 [2024-11-18 23:58:15.889750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.546 [2024-11-18 23:58:15.982287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.546 [2024-11-18 23:58:15.982343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:10.114 23:58:16 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 [2024-11-18 23:58:16.524148] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.114 [2024-11-18 23:58:16.525858] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.114 23:58:16 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 malloc0 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.114 23:58:16 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.114 [2024-11-18 23:58:16.620264] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:10.114 [2024-11-18 23:58:16.620354] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:10.114 [2024-11-18 23:58:16.620364] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:10.114 [2024-11-18 23:58:16.620372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.114 [2024-11-18 23:58:16.628293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.114 [2024-11-18 23:58:16.628312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.114 [2024-11-18 23:58:16.636153] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.114 [2024-11-18 23:58:16.636280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:10.114 [2024-11-18 23:58:16.651158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.114 1 00:14:10.114 23:58:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.114 23:58:16 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:11.048 23:58:17 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71299 00:14:11.048 23:58:17 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:11.048 23:58:17 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:11.306 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.306 fio-3.35 00:14:11.306 Starting 1 process 00:14:16.568 23:58:22 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71268 00:14:16.568 23:58:22 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:21.852 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71268 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:21.852 23:58:27 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71414 00:14:21.852 23:58:27 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:21.852 23:58:27 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:21.852 23:58:27 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71414 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71414 ']' 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.852 23:58:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.852 [2024-11-18 23:58:27.751934] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:14:21.852 [2024-11-18 23:58:27.752225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 00:14:21.852 [2024-11-18 23:58:27.912049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:21.852 [2024-11-18 23:58:28.034915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.852 [2024-11-18 23:58:28.034940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:22.113 23:58:28 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.113 [2024-11-18 23:58:28.701152] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:22.113 [2024-11-18 23:58:28.703207] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.113 23:58:28 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.113 23:58:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.374 malloc0 00:14:22.374 23:58:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.374 23:58:28 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:22.374 23:58:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.374 23:58:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.374 [2024-11-18 23:58:28.812253] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:22.374 [2024-11-18 23:58:28.812292] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:22.374 [2024-11-18 23:58:28.812303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.374 [2024-11-18 23:58:28.821151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.374 [2024-11-18 23:58:28.821182] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:22.374 1 00:14:22.374 23:58:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.374 23:58:28 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71299 00:14:23.315 [2024-11-18 23:58:29.821215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.315 [2024-11-18 23:58:29.826159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.315 [2024-11-18 23:58:29.826176] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:24.250 [2024-11-18 23:58:30.826205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:24.250 [2024-11-18 23:58:30.830144] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:24.250 [2024-11-18 23:58:30.830162] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:25.184 [2024-11-18 23:58:31.830182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:25.184 [2024-11-18 23:58:31.831169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:25.184 [2024-11-18 23:58:31.831176] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:25.184 [2024-11-18 23:58:31.831184] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:25.184 [2024-11-18 23:58:31.831262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:47.132 [2024-11-18 23:58:52.881159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:47.132 [2024-11-18 23:58:52.888140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:47.132 [2024-11-18 23:58:52.896399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:47.132 [2024-11-18 23:58:52.896418] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:13.734 00:15:13.734 fio_test: (groupid=0, jobs=1): err= 0: pid=71302: Mon Nov 18 23:59:17 2024 00:15:13.734 read: IOPS=13.6k, BW=53.1MiB/s (55.7MB/s)(3188MiB/60002msec) 00:15:13.734 slat (nsec): min=1206, max=171056, avg=5463.01, stdev=1345.97 00:15:13.734 clat (usec): min=716, max=30241k, avg=4287.42, stdev=245957.19 00:15:13.734 lat (usec): min=723, max=30241k, avg=4292.89, stdev=245957.19 00:15:13.734 clat percentiles (usec): 00:15:13.734 | 1.00th=[ 1844], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2057], 00:15:13.734 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:15:13.734 | 70.00th=[ 2180], 80.00th=[ 2180], 90.00th=[ 2278], 95.00th=[ 3490], 00:15:13.734 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 8291], 99.95th=[12125], 00:15:13.734 | 99.99th=[13304] 00:15:13.734 bw ( KiB/s): min=53672, max=121720, per=100.00%, avg=108952.00, stdev=15044.99, samples=59 00:15:13.734 iops : min=13418, max=30430, avg=27238.00, stdev=3761.25, samples=59 00:15:13.734 write: IOPS=13.6k, BW=53.1MiB/s (55.6MB/s)(3184MiB/60002msec); 0 zone resets 00:15:13.734 slat (nsec): min=1241, max=149360, avg=5640.40, stdev=1319.80 00:15:13.734 clat (usec): min=720, max=30241k, avg=5117.35, stdev=288090.82 00:15:13.734 lat (usec): min=725, max=30241k, avg=5122.99, stdev=288090.81 00:15:13.734 clat percentiles (usec): 00:15:13.734 | 1.00th=[ 1909], 5.00th=[ 2040], 10.00th=[ 2089], 20.00th=[ 2147], 00:15:13.734 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2245], 60.00th=[ 2245], 00:15:13.734 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2343], 95.00th=[ 3425], 00:15:13.734 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 8455], 99.95th=[12256], 00:15:13.734 | 99.99th=[13435] 00:15:13.734 bw ( KiB/s): min=53944, max=121744, per=100.00%, avg=108778.31, stdev=15187.92, samples=59 00:15:13.734 iops : min=13486, max=30436, avg=27194.58, stdev=3796.98, samples=59 00:15:13.734 lat (usec) : 750=0.01%, 1000=0.01% 00:15:13.734 lat (msec) : 2=6.75%, 4=89.44%, 10=3.75%, 20=0.05%, >=2000=0.01% 00:15:13.734 cpu : usr=3.02%, sys=15.51%, ctx=53338, majf=0, minf=13 00:15:13.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:13.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
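(An aside, not part of the captured run: the fio summary above can be sanity-checked against the issued-I/O totals reported just below, a quick way to confirm the kill/recover cycle lost nothing. Recomputing the read side from those raw counts:)

$ awk 'BEGIN { printf "%.1f MiB/s\n", 816100*4096/1048576/60.002 }'   # 816100 reads x 4096 B over 60.002 s
53.1 MiB/s
$ awk 'BEGIN { printf "%.1fk IOPS\n", 816100/60.002/1000 }'
13.6k IOPS

(Both match the reported read: IOPS=13.6k, BW=53.1MiB/s, consistent with the dropped=0,0,0,0 accounting below.)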
00:15:13.734 issued rwts: total=816100,815135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:13.734 00:15:13.734 Run status group 0 (all jobs): 00:15:13.734 READ: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=3188MiB (3343MB), run=60002-60002msec 00:15:13.734 WRITE: bw=53.1MiB/s (55.6MB/s), 53.1MiB/s-53.1MiB/s (55.6MB/s-55.6MB/s), io=3184MiB (3339MB), run=60002-60002msec 00:15:13.734 00:15:13.734 Disk stats (read/write): 00:15:13.734 ublkb1: ios=813029/812094, merge=0/0, ticks=3446611/4048612, in_queue=7495224, util=99.91% 00:15:13.734 23:59:17 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:13.734 23:59:17 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.734 23:59:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.734 [2024-11-18 23:59:17.916076] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.734 [2024-11-18 23:59:17.949271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.734 [2024-11-18 23:59:17.949434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.734 [2024-11-18 23:59:17.956151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.734 [2024-11-18 23:59:17.956257] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:13.734 [2024-11-18 23:59:17.956264] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:13.734 23:59:17 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.734 23:59:17 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:13.734 23:59:17 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.734 23:59:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.734 [2024-11-18 23:59:17.971254] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.734 [2024-11-18 23:59:17.980138] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:13.735 [2024-11-18 23:59:17.980173] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.735 23:59:17 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:13.735 23:59:17 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:13.735 23:59:17 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71414 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71414 ']' 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71414 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.735 23:59:17 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71414 00:15:13.735 killing process with pid 71414 00:15:13.735 23:59:18 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.735 23:59:18 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.735 23:59:18 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71414' 00:15:13.735 23:59:18 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71414 00:15:13.735 23:59:18 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 71414 00:15:13.735 [2024-11-18 23:59:19.176117] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.735 [2024-11-18 23:59:19.176183] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:13.735 ************************************ 00:15:13.735 END TEST ublk_recovery 00:15:13.735 ************************************ 00:15:13.735 00:15:13.735 real 1m4.727s 00:15:13.735 user 1m44.581s 00:15:13.735 sys 0m25.143s 00:15:13.735 23:59:20 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.735 23:59:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.735 23:59:20 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:13.735 23:59:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:13.735 23:59:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:13.735 23:59:20 -- common/autotest_common.sh@10 -- # set +x 00:15:13.735 23:59:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:13.735 23:59:20 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.735 23:59:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.735 23:59:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.735 23:59:20 -- common/autotest_common.sh@10 -- # set +x 00:15:13.735 ************************************ 00:15:13.735 START TEST ftl 00:15:13.735 ************************************ 00:15:13.735 23:59:20 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.735 * Looking for test storage... 
00:15:13.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.735 23:59:20 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:13.735 23:59:20 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:13.735 23:59:20 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.997 23:59:20 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.997 23:59:20 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.997 23:59:20 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.997 23:59:20 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.997 23:59:20 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.997 23:59:20 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:13.997 23:59:20 ftl -- scripts/common.sh@345 -- # : 1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.997 23:59:20 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.997 23:59:20 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@353 -- # local d=1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.997 23:59:20 ftl -- scripts/common.sh@355 -- # echo 1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.997 23:59:20 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@353 -- # local d=2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.997 23:59:20 ftl -- scripts/common.sh@355 -- # echo 2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.997 23:59:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.997 23:59:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.997 23:59:20 ftl -- scripts/common.sh@368 -- # return 0 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.997 --rc genhtml_branch_coverage=1 00:15:13.997 --rc genhtml_function_coverage=1 00:15:13.997 --rc genhtml_legend=1 00:15:13.997 --rc geninfo_all_blocks=1 00:15:13.997 --rc geninfo_unexecuted_blocks=1 00:15:13.997 00:15:13.997 ' 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.997 --rc genhtml_branch_coverage=1 00:15:13.997 --rc genhtml_function_coverage=1 00:15:13.997 --rc genhtml_legend=1 00:15:13.997 --rc geninfo_all_blocks=1 00:15:13.997 --rc geninfo_unexecuted_blocks=1 00:15:13.997 00:15:13.997 ' 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.997 --rc genhtml_branch_coverage=1 00:15:13.997 --rc genhtml_function_coverage=1 00:15:13.997 --rc 
genhtml_legend=1 00:15:13.997 --rc geninfo_all_blocks=1 00:15:13.997 --rc geninfo_unexecuted_blocks=1 00:15:13.997 00:15:13.997 ' 00:15:13.997 23:59:20 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.997 --rc genhtml_branch_coverage=1 00:15:13.997 --rc genhtml_function_coverage=1 00:15:13.997 --rc genhtml_legend=1 00:15:13.997 --rc geninfo_all_blocks=1 00:15:13.997 --rc geninfo_unexecuted_blocks=1 00:15:13.997 00:15:13.997 ' 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:13.997 23:59:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.997 23:59:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.997 23:59:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.997 23:59:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:13.997 23:59:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:13.997 23:59:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.997 23:59:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.997 23:59:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.997 23:59:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:13.997 23:59:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:13.997 23:59:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:13.997 23:59:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:13.997 23:59:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.997 23:59:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.997 23:59:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:13.997 23:59:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:13.997 23:59:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:13.997 23:59:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:13.997 23:59:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:13.997 23:59:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:13.997 23:59:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:13.997 23:59:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:13.997 23:59:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:13.997 23:59:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:14.257 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.257 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.257 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.257 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.257 23:59:20 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72226 00:15:14.257 23:59:20 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:14.257 23:59:20 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72226 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 72226 ']' 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.257 23:59:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:14.516 [2024-11-18 23:59:21.014871] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:14.516 [2024-11-18 23:59:21.015280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72226 ] 00:15:14.516 [2024-11-18 23:59:21.177746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.775 [2024-11-18 23:59:21.269482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.341 23:59:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.341 23:59:21 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:15.341 23:59:21 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:15.341 23:59:21 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:16.275 23:59:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:16.275 23:59:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:16.533 23:59:23 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:16.533 23:59:23 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:16.533 23:59:23 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@50 -- # break 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:16.791 23:59:23 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:16.791 23:59:23 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:17.050 23:59:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:17.050 23:59:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:17.050 23:59:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:17.050 23:59:23 ftl -- ftl/ftl.sh@63 -- # break 00:15:17.050 23:59:23 ftl -- ftl/ftl.sh@66 -- # killprocess 72226 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 72226 ']' 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@958 -- # kill -0 72226 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@959 -- # uname 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72226 00:15:17.050 killing process with pid 72226 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72226' 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@973 -- # kill 72226 00:15:17.050 23:59:23 ftl -- common/autotest_common.sh@978 -- # wait 72226 00:15:18.429 23:59:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:18.429 23:59:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.429 23:59:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.429 23:59:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.429 23:59:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:18.429 ************************************ 00:15:18.429 START TEST ftl_fio_basic 00:15:18.429 ************************************ 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.429 * Looking for test storage... 
00:15:18.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.429 --rc genhtml_branch_coverage=1 00:15:18.429 --rc genhtml_function_coverage=1 00:15:18.429 --rc genhtml_legend=1 00:15:18.429 --rc geninfo_all_blocks=1 00:15:18.429 --rc geninfo_unexecuted_blocks=1 00:15:18.429 00:15:18.429 ' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.429 --rc 
genhtml_branch_coverage=1 00:15:18.429 --rc genhtml_function_coverage=1 00:15:18.429 --rc genhtml_legend=1 00:15:18.429 --rc geninfo_all_blocks=1 00:15:18.429 --rc geninfo_unexecuted_blocks=1 00:15:18.429 00:15:18.429 ' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.429 --rc genhtml_branch_coverage=1 00:15:18.429 --rc genhtml_function_coverage=1 00:15:18.429 --rc genhtml_legend=1 00:15:18.429 --rc geninfo_all_blocks=1 00:15:18.429 --rc geninfo_unexecuted_blocks=1 00:15:18.429 00:15:18.429 ' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.429 --rc genhtml_branch_coverage=1 00:15:18.429 --rc genhtml_function_coverage=1 00:15:18.429 --rc genhtml_legend=1 00:15:18.429 --rc geninfo_all_blocks=1 00:15:18.429 --rc geninfo_unexecuted_blocks=1 00:15:18.429 00:15:18.429 ' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.429 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.430 
23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72353 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72353 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72353 ']' 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
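(For orientation — an editorial condensation of the xtrace that follows, with the UUIDs elided here; the literal invocations, including the e7be64bb-... lvstore and 9c971600-... lvol UUIDs, appear in the log below. The FTL device bring-up reduces to this rpc.py sequence:)

$ rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device
$ rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
$ rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs uuid>             # thin-provisioned 103424 MiB lvol
$ rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # nv-cache NVMe device
$ rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache partition
$ rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 60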
00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.430 23:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:18.430 [2024-11-18 23:59:25.035582] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:18.430 [2024-11-18 23:59:25.036265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72353 ] 00:15:18.689 [2024-11-18 23:59:25.193520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.689 [2024-11-18 23:59:25.287563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.689 [2024-11-18 23:59:25.287795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.689 [2024-11-18 23:59:25.287805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:19.256 23:59:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:19.515 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:19.774 { 00:15:19.774 "name": "nvme0n1", 00:15:19.774 "aliases": [ 00:15:19.774 "87f9327f-da6f-40ca-b16e-26cf1ec46a15" 00:15:19.774 ], 00:15:19.774 "product_name": "NVMe disk", 00:15:19.774 "block_size": 4096, 00:15:19.774 "num_blocks": 1310720, 00:15:19.774 "uuid": "87f9327f-da6f-40ca-b16e-26cf1ec46a15", 00:15:19.774 "numa_id": -1, 00:15:19.774 "assigned_rate_limits": { 00:15:19.774 "rw_ios_per_sec": 0, 00:15:19.774 "rw_mbytes_per_sec": 0, 00:15:19.774 "r_mbytes_per_sec": 0, 00:15:19.774 "w_mbytes_per_sec": 0 00:15:19.774 }, 00:15:19.774 "claimed": false, 00:15:19.774 "zoned": false, 00:15:19.774 "supported_io_types": { 00:15:19.774 "read": true, 00:15:19.774 "write": true, 00:15:19.774 "unmap": true, 00:15:19.774 "flush": true, 00:15:19.774 "reset": true, 00:15:19.774 "nvme_admin": true, 00:15:19.774 "nvme_io": true, 00:15:19.774 "nvme_io_md": 
false, 00:15:19.774 "write_zeroes": true, 00:15:19.774 "zcopy": false, 00:15:19.774 "get_zone_info": false, 00:15:19.774 "zone_management": false, 00:15:19.774 "zone_append": false, 00:15:19.774 "compare": true, 00:15:19.774 "compare_and_write": false, 00:15:19.774 "abort": true, 00:15:19.774 "seek_hole": false, 00:15:19.774 "seek_data": false, 00:15:19.774 "copy": true, 00:15:19.774 "nvme_iov_md": false 00:15:19.774 }, 00:15:19.774 "driver_specific": { 00:15:19.774 "nvme": [ 00:15:19.774 { 00:15:19.774 "pci_address": "0000:00:11.0", 00:15:19.774 "trid": { 00:15:19.774 "trtype": "PCIe", 00:15:19.774 "traddr": "0000:00:11.0" 00:15:19.774 }, 00:15:19.774 "ctrlr_data": { 00:15:19.774 "cntlid": 0, 00:15:19.774 "vendor_id": "0x1b36", 00:15:19.774 "model_number": "QEMU NVMe Ctrl", 00:15:19.774 "serial_number": "12341", 00:15:19.774 "firmware_revision": "8.0.0", 00:15:19.774 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:19.774 "oacs": { 00:15:19.774 "security": 0, 00:15:19.774 "format": 1, 00:15:19.774 "firmware": 0, 00:15:19.774 "ns_manage": 1 00:15:19.774 }, 00:15:19.774 "multi_ctrlr": false, 00:15:19.774 "ana_reporting": false 00:15:19.774 }, 00:15:19.774 "vs": { 00:15:19.774 "nvme_version": "1.4" 00:15:19.774 }, 00:15:19.774 "ns_data": { 00:15:19.774 "id": 1, 00:15:19.774 "can_share": false 00:15:19.774 } 00:15:19.774 } 00:15:19.774 ], 00:15:19.774 "mp_policy": "active_passive" 00:15:19.774 } 00:15:19.774 } 00:15:19.774 ]' 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:19.774 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:20.032 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:20.032 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:20.290 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e7be64bb-6b0f-4758-872d-cb2e1b5fca2b 00:15:20.290 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e7be64bb-6b0f-4758-872d-cb2e1b5fca2b 00:15:20.548 23:59:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9c971600-509c-402c-8547-508c8735326a 00:15:20.548 23:59:26 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9c971600-509c-402c-8547-508c8735326a 00:15:20.548 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:20.548 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:20.548 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9c971600-509c-402c-8547-508c8735326a 00:15:20.548 23:59:26 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9c971600-509c-402c-8547-508c8735326a 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9c971600-509c-402c-8547-508c8735326a 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:20.549 23:59:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c971600-509c-402c-8547-508c8735326a 00:15:20.549 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:20.549 { 00:15:20.549 "name": "9c971600-509c-402c-8547-508c8735326a", 00:15:20.549 "aliases": [ 00:15:20.549 "lvs/nvme0n1p0" 00:15:20.549 ], 00:15:20.549 "product_name": "Logical Volume", 00:15:20.549 "block_size": 4096, 00:15:20.549 "num_blocks": 26476544, 00:15:20.549 "uuid": "9c971600-509c-402c-8547-508c8735326a", 00:15:20.549 "assigned_rate_limits": { 00:15:20.549 "rw_ios_per_sec": 0, 00:15:20.549 "rw_mbytes_per_sec": 0, 00:15:20.549 "r_mbytes_per_sec": 0, 00:15:20.549 "w_mbytes_per_sec": 0 00:15:20.549 }, 00:15:20.549 "claimed": false, 00:15:20.549 "zoned": false, 00:15:20.549 "supported_io_types": { 00:15:20.549 "read": true, 00:15:20.549 "write": true, 00:15:20.549 "unmap": true, 00:15:20.549 "flush": false, 00:15:20.549 "reset": true, 00:15:20.549 "nvme_admin": false, 00:15:20.549 "nvme_io": false, 00:15:20.549 "nvme_io_md": false, 00:15:20.549 "write_zeroes": true, 00:15:20.549 "zcopy": false, 00:15:20.549 "get_zone_info": false, 00:15:20.549 "zone_management": false, 00:15:20.549 "zone_append": false, 00:15:20.549 "compare": false, 00:15:20.549 "compare_and_write": false, 00:15:20.549 "abort": false, 00:15:20.549 "seek_hole": true, 00:15:20.549 "seek_data": true, 00:15:20.549 "copy": false, 00:15:20.549 "nvme_iov_md": false 00:15:20.549 }, 00:15:20.549 "driver_specific": { 00:15:20.549 "lvol": { 00:15:20.549 "lvol_store_uuid": "e7be64bb-6b0f-4758-872d-cb2e1b5fca2b", 00:15:20.549 "base_bdev": "nvme0n1", 00:15:20.549 "thin_provision": true, 00:15:20.549 "num_allocated_clusters": 0, 00:15:20.549 "snapshot": false, 00:15:20.549 "clone": false, 00:15:20.549 "esnap_clone": false 00:15:20.549 } 00:15:20.549 } 00:15:20.549 } 00:15:20.549 ]' 00:15:20.549 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:20.549 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:20.549 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:20.807 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9c971600-509c-402c-8547-508c8735326a 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9c971600-509c-402c-8547-508c8735326a 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c971600-509c-402c-8547-508c8735326a 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:21.066 { 00:15:21.066 "name": "9c971600-509c-402c-8547-508c8735326a", 00:15:21.066 "aliases": [ 00:15:21.066 "lvs/nvme0n1p0" 00:15:21.066 ], 00:15:21.066 "product_name": "Logical Volume", 00:15:21.066 "block_size": 4096, 00:15:21.066 "num_blocks": 26476544, 00:15:21.066 "uuid": "9c971600-509c-402c-8547-508c8735326a", 00:15:21.066 "assigned_rate_limits": { 00:15:21.066 "rw_ios_per_sec": 0, 00:15:21.066 "rw_mbytes_per_sec": 0, 00:15:21.066 "r_mbytes_per_sec": 0, 00:15:21.066 "w_mbytes_per_sec": 0 00:15:21.066 }, 00:15:21.066 "claimed": false, 00:15:21.066 "zoned": false, 00:15:21.066 "supported_io_types": { 00:15:21.066 "read": true, 00:15:21.066 "write": true, 00:15:21.066 "unmap": true, 00:15:21.066 "flush": false, 00:15:21.066 "reset": true, 00:15:21.066 "nvme_admin": false, 00:15:21.066 "nvme_io": false, 00:15:21.066 "nvme_io_md": false, 00:15:21.066 "write_zeroes": true, 00:15:21.066 "zcopy": false, 00:15:21.066 "get_zone_info": false, 00:15:21.066 "zone_management": false, 00:15:21.066 "zone_append": false, 00:15:21.066 "compare": false, 00:15:21.066 "compare_and_write": false, 00:15:21.066 "abort": false, 00:15:21.066 "seek_hole": true, 00:15:21.066 "seek_data": true, 00:15:21.066 "copy": false, 00:15:21.066 "nvme_iov_md": false 00:15:21.066 }, 00:15:21.066 "driver_specific": { 00:15:21.066 "lvol": { 00:15:21.066 "lvol_store_uuid": "e7be64bb-6b0f-4758-872d-cb2e1b5fca2b", 00:15:21.066 "base_bdev": "nvme0n1", 00:15:21.066 "thin_provision": true, 00:15:21.066 "num_allocated_clusters": 0, 00:15:21.066 "snapshot": false, 00:15:21.066 "clone": false, 00:15:21.066 "esnap_clone": false 00:15:21.066 } 00:15:21.066 } 00:15:21.066 } 00:15:21.066 ]' 00:15:21.066 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:21.324 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9c971600-509c-402c-8547-508c8735326a 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9c971600-509c-402c-8547-508c8735326a 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:21.324 23:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c971600-509c-402c-8547-508c8735326a 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:21.583 { 00:15:21.583 "name": "9c971600-509c-402c-8547-508c8735326a", 00:15:21.583 "aliases": [ 00:15:21.583 "lvs/nvme0n1p0" 00:15:21.583 ], 00:15:21.583 "product_name": "Logical Volume", 00:15:21.583 "block_size": 4096, 00:15:21.583 "num_blocks": 26476544, 00:15:21.583 "uuid": "9c971600-509c-402c-8547-508c8735326a", 00:15:21.583 "assigned_rate_limits": { 00:15:21.583 "rw_ios_per_sec": 0, 00:15:21.583 "rw_mbytes_per_sec": 0, 00:15:21.583 "r_mbytes_per_sec": 0, 00:15:21.583 "w_mbytes_per_sec": 0 00:15:21.583 }, 00:15:21.583 "claimed": false, 00:15:21.583 "zoned": false, 00:15:21.583 "supported_io_types": { 00:15:21.583 "read": true, 00:15:21.583 "write": true, 00:15:21.583 "unmap": true, 00:15:21.583 "flush": false, 00:15:21.583 "reset": true, 00:15:21.583 "nvme_admin": false, 00:15:21.583 "nvme_io": false, 00:15:21.583 "nvme_io_md": false, 00:15:21.583 "write_zeroes": true, 00:15:21.583 "zcopy": false, 00:15:21.583 "get_zone_info": false, 00:15:21.583 "zone_management": false, 00:15:21.583 "zone_append": false, 00:15:21.583 "compare": false, 00:15:21.583 "compare_and_write": false, 00:15:21.583 "abort": false, 00:15:21.583 "seek_hole": true, 00:15:21.583 "seek_data": true, 00:15:21.583 "copy": false, 00:15:21.583 "nvme_iov_md": false 00:15:21.583 }, 00:15:21.583 "driver_specific": { 00:15:21.583 "lvol": { 00:15:21.583 "lvol_store_uuid": "e7be64bb-6b0f-4758-872d-cb2e1b5fca2b", 00:15:21.583 "base_bdev": "nvme0n1", 00:15:21.583 "thin_provision": true, 00:15:21.583 "num_allocated_clusters": 0, 00:15:21.583 "snapshot": false, 00:15:21.583 "clone": false, 00:15:21.583 "esnap_clone": false 00:15:21.583 } 00:15:21.583 } 00:15:21.583 } 00:15:21.583 ]' 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:21.583 23:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9c971600-509c-402c-8547-508c8735326a -c nvc0n1p0 --l2p_dram_limit 60 00:15:21.843 [2024-11-18 23:59:28.443399] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.443552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:21.843 [2024-11-18 23:59:28.443572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:21.843 [2024-11-18 23:59:28.443580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.443630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.443640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:21.843 [2024-11-18 23:59:28.443649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:21.843 [2024-11-18 23:59:28.443655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.443689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:21.843 [2024-11-18 23:59:28.444246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:21.843 [2024-11-18 23:59:28.444264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.444271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:21.843 [2024-11-18 23:59:28.444280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:15:21.843 [2024-11-18 23:59:28.444287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.444383] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97ea3909-2ed8-4931-b1e7-7c7da7c42f8b 00:15:21.843 [2024-11-18 23:59:28.445658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.445690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:21.843 [2024-11-18 23:59:28.445698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:21.843 [2024-11-18 23:59:28.445706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.452467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.452496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:21.843 [2024-11-18 23:59:28.452504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:15:21.843 [2024-11-18 23:59:28.452511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.452601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.452613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:21.843 [2024-11-18 23:59:28.452619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:21.843 [2024-11-18 23:59:28.452630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.452675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.452685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:21.843 [2024-11-18 23:59:28.452692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:21.843 [2024-11-18 23:59:28.452699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:21.843 [2024-11-18 23:59:28.452725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:21.843 [2024-11-18 23:59:28.455984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.456010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:21.843 [2024-11-18 23:59:28.456021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:15:21.843 [2024-11-18 23:59:28.456030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.456069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.456077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:21.843 [2024-11-18 23:59:28.456085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:21.843 [2024-11-18 23:59:28.456090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.843 [2024-11-18 23:59:28.456138] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:21.843 [2024-11-18 23:59:28.456262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:21.843 [2024-11-18 23:59:28.456276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:21.843 [2024-11-18 23:59:28.456295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:21.843 [2024-11-18 23:59:28.456305] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:21.843 [2024-11-18 23:59:28.456312] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:21.843 [2024-11-18 23:59:28.456320] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:21.843 [2024-11-18 23:59:28.456327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:21.843 [2024-11-18 23:59:28.456334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:21.843 [2024-11-18 23:59:28.456339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:21.843 [2024-11-18 23:59:28.456347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.843 [2024-11-18 23:59:28.456356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:21.844 [2024-11-18 23:59:28.456364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:15:21.844 [2024-11-18 23:59:28.456370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.844 [2024-11-18 23:59:28.456443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.844 [2024-11-18 23:59:28.456451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:21.844 [2024-11-18 23:59:28.456459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:21.844 [2024-11-18 23:59:28.456465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.844 [2024-11-18 23:59:28.456562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:21.844 [2024-11-18 23:59:28.456570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:21.844 
[2024-11-18 23:59:28.456581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:21.844 [2024-11-18 23:59:28.456599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:21.844 [2024-11-18 23:59:28.456625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:21.844 [2024-11-18 23:59:28.456637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:21.844 [2024-11-18 23:59:28.456642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:21.844 [2024-11-18 23:59:28.456649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:21.844 [2024-11-18 23:59:28.456654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:21.844 [2024-11-18 23:59:28.456661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:21.844 [2024-11-18 23:59:28.456667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:21.844 [2024-11-18 23:59:28.456681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:21.844 [2024-11-18 23:59:28.456700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:21.844 [2024-11-18 23:59:28.456717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:21.844 [2024-11-18 23:59:28.456735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:21.844 [2024-11-18 23:59:28.456752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:21.844 [2024-11-18 23:59:28.456773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:21.844 [2024-11-18 23:59:28.456784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:21.844 [2024-11-18 23:59:28.456801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:21.844 [2024-11-18 23:59:28.456808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:21.844 [2024-11-18 23:59:28.456813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:21.844 [2024-11-18 23:59:28.456819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:21.844 [2024-11-18 23:59:28.456824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:21.844 [2024-11-18 23:59:28.456840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:21.844 [2024-11-18 23:59:28.456846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:21.844 [2024-11-18 23:59:28.456858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:21.844 [2024-11-18 23:59:28.456864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.844 [2024-11-18 23:59:28.456877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:21.844 [2024-11-18 23:59:28.456886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:21.844 [2024-11-18 23:59:28.456892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:21.844 [2024-11-18 23:59:28.456899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:21.844 [2024-11-18 23:59:28.456904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:21.844 [2024-11-18 23:59:28.456910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:21.844 [2024-11-18 23:59:28.456918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:21.844 [2024-11-18 23:59:28.456927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.456934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:21.844 [2024-11-18 23:59:28.456941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:21.844 [2024-11-18 23:59:28.456947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:21.844 [2024-11-18 23:59:28.456954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:21.844 [2024-11-18 23:59:28.456959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:21.844 [2024-11-18 23:59:28.456966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:21.844 [2024-11-18 
23:59:28.456971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:21.844 [2024-11-18 23:59:28.456978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:21.844 [2024-11-18 23:59:28.456984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:21.844 [2024-11-18 23:59:28.456994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.456999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.457006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.457012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.457020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:21.844 [2024-11-18 23:59:28.457026] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:21.844 [2024-11-18 23:59:28.457033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.457041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:21.844 [2024-11-18 23:59:28.457050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:21.844 [2024-11-18 23:59:28.457056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:21.844 [2024-11-18 23:59:28.457063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:21.844 [2024-11-18 23:59:28.457069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.844 [2024-11-18 23:59:28.457077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:21.844 [2024-11-18 23:59:28.457083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:15:21.844 [2024-11-18 23:59:28.457090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.844 [2024-11-18 23:59:28.457174] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
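
A note on the "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" message recorded above: the traced command '[' -eq 1 ']' shows that the variable on the left of -eq expanded to nothing, so [ received -eq as its first operand and rejected it as a unary test. The sketch below reproduces the failure and the conventional guards; FTL_TEST_FLAG is a hypothetical stand-in for whatever variable the script left unset, so this is an illustration, not the fio.sh source.

    # Reproduction of the "[: -eq: unary operator expected" failure traced above.
    # FTL_TEST_FLAG is a hypothetical stand-in for the unset variable at line 52.
    unset FTL_TEST_FLAG

    # Unquoted and unset, this expands to `[ -eq 1 ]`: [ prints the error and
    # returns status 2, the branch is skipped, and the script keeps running.
    if [ $FTL_TEST_FLAG -eq 1 ]; then
        echo "flag set"
    fi

    # Guard 1: quote the expansion and give it a default so [ always sees two
    # operands in the integer comparison.
    if [ "${FTL_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi

    # Guard 2: [[ ]] does not word-split, and an empty operand of -eq is
    # evaluated arithmetically as 0 instead of raising an error.
    if [[ $FTL_TEST_FLAG -eq 1 ]]; then
        echo "flag set"
    fi

That matches the trace, where the failed test at fio.sh@52 simply falls through: execution continues to the '[' -z '' ']' check at line 58 and on into bdev_ftl_create.
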
00:15:21.844 [2024-11-18 23:59:28.457187] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:24.373 [2024-11-18 23:59:30.596862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.596912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:24.373 [2024-11-18 23:59:30.596929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2139.680 ms 00:15:24.373 [2024-11-18 23:59:30.596939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.624834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.624881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:24.373 [2024-11-18 23:59:30.624895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.499 ms 00:15:24.373 [2024-11-18 23:59:30.624906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.625037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.625051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:24.373 [2024-11-18 23:59:30.625061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:24.373 [2024-11-18 23:59:30.625073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.668878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.669073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:24.373 [2024-11-18 23:59:30.669102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.732 ms 00:15:24.373 [2024-11-18 23:59:30.669116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.669187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.669201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:24.373 [2024-11-18 23:59:30.669212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:24.373 [2024-11-18 23:59:30.669225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.669706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.669729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:24.373 [2024-11-18 23:59:30.669741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:15:24.373 [2024-11-18 23:59:30.669755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.669906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.669920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:24.373 [2024-11-18 23:59:30.669931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:15:24.373 [2024-11-18 23:59:30.669945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.686898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.686930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:24.373 [2024-11-18 
23:59:30.686941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.919 ms 00:15:24.373 [2024-11-18 23:59:30.686950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.699425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:24.373 [2024-11-18 23:59:30.716459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.716503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:24.373 [2024-11-18 23:59:30.716514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.417 ms 00:15:24.373 [2024-11-18 23:59:30.716524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.768644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.768685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:24.373 [2024-11-18 23:59:30.768703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.070 ms 00:15:24.373 [2024-11-18 23:59:30.768711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.768914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.768927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:24.373 [2024-11-18 23:59:30.768939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:15:24.373 [2024-11-18 23:59:30.768947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.802296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.802488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:24.373 [2024-11-18 23:59:30.802522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.280 ms 00:15:24.373 [2024-11-18 23:59:30.802536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.829494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.829627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:24.373 [2024-11-18 23:59:30.829650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.892 ms 00:15:24.373 [2024-11-18 23:59:30.829658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.830269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.830289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:24.373 [2024-11-18 23:59:30.830300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:15:24.373 [2024-11-18 23:59:30.830308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.901974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.902015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:24.373 [2024-11-18 23:59:30.902033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.610 ms 00:15:24.373 [2024-11-18 23:59:30.902044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 
23:59:30.926979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.927012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:24.373 [2024-11-18 23:59:30.927026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.857 ms 00:15:24.373 [2024-11-18 23:59:30.927033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.950580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.950612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:24.373 [2024-11-18 23:59:30.950624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:15:24.373 [2024-11-18 23:59:30.950632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.974100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.974144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:24.373 [2024-11-18 23:59:30.974157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.431 ms 00:15:24.373 [2024-11-18 23:59:30.974164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.974198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.974206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:24.373 [2024-11-18 23:59:30.974219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:24.373 [2024-11-18 23:59:30.974228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.974321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.373 [2024-11-18 23:59:30.974331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:24.373 [2024-11-18 23:59:30.974341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:24.373 [2024-11-18 23:59:30.974348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.373 [2024-11-18 23:59:30.975959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2532.088 ms, result 0 00:15:24.373 { 00:15:24.373 "name": "ftl0", 00:15:24.373 "uuid": "97ea3909-2ed8-4931-b1e7-7c7da7c42f8b" 00:15:24.373 } 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.373 23:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:24.634 23:59:31 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:24.895 [ 00:15:24.895 { 00:15:24.895 "name": "ftl0", 00:15:24.895 "aliases": [ 00:15:24.895 "97ea3909-2ed8-4931-b1e7-7c7da7c42f8b" 00:15:24.895 ], 00:15:24.895 "product_name": "FTL 
disk", 00:15:24.895 "block_size": 4096, 00:15:24.895 "num_blocks": 20971520, 00:15:24.895 "uuid": "97ea3909-2ed8-4931-b1e7-7c7da7c42f8b", 00:15:24.895 "assigned_rate_limits": { 00:15:24.895 "rw_ios_per_sec": 0, 00:15:24.895 "rw_mbytes_per_sec": 0, 00:15:24.895 "r_mbytes_per_sec": 0, 00:15:24.895 "w_mbytes_per_sec": 0 00:15:24.895 }, 00:15:24.895 "claimed": false, 00:15:24.895 "zoned": false, 00:15:24.895 "supported_io_types": { 00:15:24.895 "read": true, 00:15:24.895 "write": true, 00:15:24.895 "unmap": true, 00:15:24.895 "flush": true, 00:15:24.895 "reset": false, 00:15:24.895 "nvme_admin": false, 00:15:24.895 "nvme_io": false, 00:15:24.895 "nvme_io_md": false, 00:15:24.895 "write_zeroes": true, 00:15:24.895 "zcopy": false, 00:15:24.895 "get_zone_info": false, 00:15:24.895 "zone_management": false, 00:15:24.895 "zone_append": false, 00:15:24.895 "compare": false, 00:15:24.895 "compare_and_write": false, 00:15:24.895 "abort": false, 00:15:24.895 "seek_hole": false, 00:15:24.895 "seek_data": false, 00:15:24.895 "copy": false, 00:15:24.895 "nvme_iov_md": false 00:15:24.895 }, 00:15:24.895 "driver_specific": { 00:15:24.895 "ftl": { 00:15:24.895 "base_bdev": "9c971600-509c-402c-8547-508c8735326a", 00:15:24.895 "cache": "nvc0n1p0" 00:15:24.895 } 00:15:24.895 } 00:15:24.895 } 00:15:24.895 ] 00:15:24.895 23:59:31 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:24.895 23:59:31 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:24.895 23:59:31 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:25.156 23:59:31 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:25.156 23:59:31 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:25.156 [2024-11-18 23:59:31.792309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.156 [2024-11-18 23:59:31.792349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:25.156 [2024-11-18 23:59:31.792361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:25.156 [2024-11-18 23:59:31.792371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.792404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:25.157 [2024-11-18 23:59:31.795198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.795226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:25.157 [2024-11-18 23:59:31.795239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:15:25.157 [2024-11-18 23:59:31.795248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.795727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.795861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:25.157 [2024-11-18 23:59:31.795880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:15:25.157 [2024-11-18 23:59:31.795888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.799146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.799170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:25.157 
[2024-11-18 23:59:31.799182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:15:25.157 [2024-11-18 23:59:31.799192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.805606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.805634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:25.157 [2024-11-18 23:59:31.805647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.383 ms 00:15:25.157 [2024-11-18 23:59:31.805655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.828546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.828580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:25.157 [2024-11-18 23:59:31.828593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.802 ms 00:15:25.157 [2024-11-18 23:59:31.828601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.843779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.843812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:25.157 [2024-11-18 23:59:31.843826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.116 ms 00:15:25.157 [2024-11-18 23:59:31.843837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.157 [2024-11-18 23:59:31.844034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.157 [2024-11-18 23:59:31.844051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:25.157 [2024-11-18 23:59:31.844062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:15:25.157 [2024-11-18 23:59:31.844069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.419 [2024-11-18 23:59:31.867313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.419 [2024-11-18 23:59:31.867343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:25.419 [2024-11-18 23:59:31.867356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.214 ms 00:15:25.419 [2024-11-18 23:59:31.867362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.419 [2024-11-18 23:59:31.889815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.419 [2024-11-18 23:59:31.889843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:25.419 [2024-11-18 23:59:31.889855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.409 ms 00:15:25.419 [2024-11-18 23:59:31.889862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.419 [2024-11-18 23:59:31.911901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.419 [2024-11-18 23:59:31.911930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:25.419 [2024-11-18 23:59:31.911942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.984 ms 00:15:25.419 [2024-11-18 23:59:31.911948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.419 [2024-11-18 23:59:31.934402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.419 [2024-11-18 23:59:31.934430] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:25.419 [2024-11-18 23:59:31.934443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.361 ms 00:15:25.419 [2024-11-18 23:59:31.934450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.419 [2024-11-18 23:59:31.934497] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:25.419 [2024-11-18 23:59:31.934510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:25.419 [2024-11-18 23:59:31.934665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 
[2024-11-18 23:59:31.934699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:25.420 [2024-11-18 23:59:31.934920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.934995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:25.420 [2024-11-18 23:59:31.935417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:25.420 [2024-11-18 23:59:31.935441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97ea3909-2ed8-4931-b1e7-7c7da7c42f8b 00:15:25.420 [2024-11-18 23:59:31.935449] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:25.420 [2024-11-18 23:59:31.935461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:25.420 [2024-11-18 23:59:31.935468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:25.420 [2024-11-18 23:59:31.935480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:25.420 [2024-11-18 23:59:31.935487] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:25.420 [2024-11-18 23:59:31.935496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:25.420 [2024-11-18 23:59:31.935504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:25.421 [2024-11-18 23:59:31.935512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:25.421 [2024-11-18 23:59:31.935518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:25.421 [2024-11-18 23:59:31.935527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.421 [2024-11-18 23:59:31.935534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:25.421 [2024-11-18 23:59:31.935545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:15:25.421 [2024-11-18 23:59:31.935552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:31.948330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.421 [2024-11-18 23:59:31.948360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:25.421 [2024-11-18 23:59:31.948372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.724 ms 00:15:25.421 [2024-11-18 23:59:31.948380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:31.948734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.421 [2024-11-18 23:59:31.948744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:25.421 [2024-11-18 23:59:31.948754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:15:25.421 [2024-11-18 23:59:31.948761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:31.995286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.421 [2024-11-18 23:59:31.995321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:25.421 [2024-11-18 23:59:31.995333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.421 [2024-11-18 23:59:31.995342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
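
Stepping back from the per-band dump above: the whole startup and shutdown sequence traced in this section is driven by two RPCs. A condensed sketch of that round trip, using the device names and flags exactly as they appear in the trace (only the grouping into a standalone snippet is mine):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Startup, reported above as "FTL startup ... duration = 2532.088 ms": build
    # ftl0 over the thin-provisioned lvol with nvc0n1p0 as the NV cache and a
    # 60 MiB L2P budget ("l2p maximum resident size is: 59 (of 60) MiB").
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d 9c971600-509c-402c-8547-508c8735326a \
        -c nvc0n1p0 --l2p_dram_limit 60

    # Shutdown, reported below as "FTL shutdown ... result 0": persist L2P and
    # metadata, mark the superblock clean, dump stats, then unwind initialization.
    "$rpc" bdev_ftl_unload -b ftl0

Note that the Rollback entries that follow mirror the startup Action names (Initialize reloc, bands metadata, trim map, valid map, NV cache, ... Open cache bdev, Open base bdev) in reverse order.
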
00:15:25.421 [2024-11-18 23:59:31.995405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.421 [2024-11-18 23:59:31.995413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:25.421 [2024-11-18 23:59:31.995423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.421 [2024-11-18 23:59:31.995445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:31.995536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.421 [2024-11-18 23:59:31.995547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:25.421 [2024-11-18 23:59:31.995560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.421 [2024-11-18 23:59:31.995567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:31.995596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.421 [2024-11-18 23:59:31.995604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:25.421 [2024-11-18 23:59:31.995613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.421 [2024-11-18 23:59:31.995621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.421 [2024-11-18 23:59:32.081982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.421 [2024-11-18 23:59:32.082192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:25.421 [2024-11-18 23:59:32.082214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.421 [2024-11-18 23:59:32.082223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.145564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.145711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:25.682 [2024-11-18 23:59:32.145733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.145741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.145845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.145856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:25.682 [2024-11-18 23:59:32.145866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.145876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.145937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.145946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:25.682 [2024-11-18 23:59:32.145955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.145962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.146066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.146076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:25.682 [2024-11-18 23:59:32.146086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 
23:59:32.146093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.146163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.146172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:25.682 [2024-11-18 23:59:32.146182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.146189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.146231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.146239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:25.682 [2024-11-18 23:59:32.146248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.146255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.146307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:25.682 [2024-11-18 23:59:32.146317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:25.682 [2024-11-18 23:59:32.146326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:25.682 [2024-11-18 23:59:32.146333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.682 [2024-11-18 23:59:32.146480] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.154 ms, result 0 00:15:25.682 true 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72353 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72353 ']' 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72353 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72353 00:15:25.682 killing process with pid 72353 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72353' 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72353 00:15:25.682 23:59:32 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72353 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.814 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.815 23:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.815 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:33.815 fio-3.35 00:15:33.815 Starting 1 thread 00:15:39.100 00:15:39.100 test: (groupid=0, jobs=1): err= 0: pid=72532: Mon Nov 18 23:59:44 2024 00:15:39.100 read: IOPS=1151, BW=76.4MiB/s (80.1MB/s)(255MiB/3330msec) 00:15:39.100 slat (nsec): min=2846, max=32529, avg=4345.32, stdev=2131.89 00:15:39.100 clat (usec): min=237, max=1493, avg=393.19, stdev=162.43 00:15:39.100 lat (usec): min=241, max=1502, avg=397.53, stdev=163.12 00:15:39.100 clat percentiles (usec): 00:15:39.100 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 285], 20.00th=[ 293], 00:15:39.100 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 326], 00:15:39.100 | 70.00th=[ 379], 80.00th=[ 469], 90.00th=[ 627], 95.00th=[ 840], 00:15:39.100 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 1090], 99.95th=[ 1287], 00:15:39.100 | 99.99th=[ 1500] 00:15:39.100 write: IOPS=1159, BW=77.0MiB/s (80.7MB/s)(256MiB/3327msec); 0 zone resets 00:15:39.100 slat (nsec): min=13369, max=65807, avg=18620.84, stdev=3618.27 00:15:39.100 clat (usec): min=278, max=2907, avg=436.53, stdev=211.78 00:15:39.100 lat (usec): min=301, max=2932, avg=455.15, stdev=212.99 00:15:39.100 clat percentiles (usec): 00:15:39.100 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:15:39.100 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:15:39.100 | 70.00th=[ 404], 80.00th=[ 537], 90.00th=[ 709], 95.00th=[ 914], 00:15:39.100 | 99.00th=[ 1205], 99.50th=[ 1516], 99.90th=[ 1778], 99.95th=[ 2114], 00:15:39.100 | 99.99th=[ 2900] 00:15:39.100 bw ( KiB/s): min=48416, max=93568, per=97.81%, avg=77089.33, stdev=18722.69, samples=6 00:15:39.100 iops : min= 712, max= 1376, avg=1133.67, stdev=275.33, samples=6 00:15:39.100 lat (usec) : 250=0.08%, 500=79.61%, 750=11.63%, 
1000=7.48% 00:15:39.101 lat (msec) : 2=1.17%, 4=0.04% 00:15:39.101 cpu : usr=99.34%, sys=0.00%, ctx=6, majf=0, minf=1170 00:15:39.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.101 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.101 00:15:39.101 Run status group 0 (all jobs): 00:15:39.101 READ: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=255MiB (267MB), run=3330-3330msec 00:15:39.101 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=256MiB (269MB), run=3327-3327msec 00:15:39.673 ----------------------------------------------------- 00:15:39.673 Suppressions used: 00:15:39.673 count bytes template 00:15:39.673 1 5 /usr/src/fio/parse.c 00:15:39.673 1 8 libtcmalloc_minimal.so 00:15:39.673 1 904 libcrypto.so 00:15:39.673 ----------------------------------------------------- 00:15:39.673 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:39.673 23:59:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.933 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:39.933 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:39.933 fio-3.35 00:15:39.933 Starting 2 threads 00:16:06.508 00:16:06.508 first_half: (groupid=0, jobs=1): err= 0: pid=72633: Tue Nov 19 00:00:11 2024 00:16:06.508 read: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(255MiB/24112msec) 00:16:06.508 slat (usec): min=3, max=528, avg= 4.40, stdev= 2.67 00:16:06.508 clat (usec): min=650, max=302869, avg=35408.98, stdev=19986.72 00:16:06.508 lat (usec): min=654, max=302874, avg=35413.38, stdev=19986.83 00:16:06.508 clat percentiles (msec): 00:16:06.508 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 30], 00:16:06.508 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:16:06.508 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 50], 00:16:06.508 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 271], 99.95th=[ 292], 00:16:06.508 | 99.99th=[ 296] 00:16:06.508 write: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(256MiB/20267msec); 0 zone resets 00:16:06.508 slat (usec): min=3, max=773, avg= 5.81, stdev= 4.79 00:16:06.508 clat (usec): min=346, max=117178, avg=11825.25, stdev=19695.33 00:16:06.508 lat (usec): min=352, max=117185, avg=11831.07, stdev=19695.38 00:16:06.508 clat percentiles (usec): 00:16:06.508 | 1.00th=[ 652], 5.00th=[ 775], 10.00th=[ 955], 20.00th=[ 1516], 00:16:06.508 | 30.00th=[ 2966], 40.00th=[ 3916], 50.00th=[ 4621], 60.00th=[ 5538], 00:16:06.508 | 70.00th=[ 8455], 80.00th=[ 11994], 90.00th=[ 29230], 95.00th=[ 63701], 00:16:06.508 | 99.00th=[ 95945], 99.50th=[102237], 99.90th=[112722], 99.95th=[114820], 00:16:06.508 | 99.99th=[115868] 00:16:06.508 bw ( KiB/s): min= 920, max=42048, per=84.44%, avg=21845.33, stdev=10082.08, samples=24 00:16:06.508 iops : min= 230, max=10512, avg=5461.33, stdev=2520.52, samples=24 00:16:06.508 lat (usec) : 500=0.02%, 750=2.06%, 1000=3.72% 00:16:06.508 lat (msec) : 2=5.79%, 4=9.59%, 10=17.69%, 20=6.58%, 50=47.80% 00:16:06.508 lat (msec) : 100=5.43%, 250=1.24%, 500=0.07% 00:16:06.508 cpu : usr=98.75%, sys=0.32%, ctx=198, majf=0, minf=5564 00:16:06.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:06.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.508 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.508 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.508 second_half: (groupid=0, jobs=1): err= 0: pid=72634: Tue Nov 19 00:00:11 2024 00:16:06.508 read: IOPS=2724, BW=10.6MiB/s (11.2MB/s)(254MiB/23910msec) 00:16:06.508 slat (usec): min=2, max=228, avg= 4.87, stdev= 1.68 00:16:06.508 clat (usec): min=625, max=301709, avg=36318.15, stdev=19391.66 00:16:06.508 lat (usec): min=630, max=301715, avg=36323.02, stdev=19391.72 00:16:06.508 clat percentiles (msec): 00:16:06.508 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:16:06.508 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:16:06.508 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 42], 
95.00th=[ 52], 00:16:06.508 | 99.00th=[ 131], 99.50th=[ 159], 99.90th=[ 255], 99.95th=[ 284], 00:16:06.508 | 99.99th=[ 296] 00:16:06.508 write: IOPS=4442, BW=17.4MiB/s (18.2MB/s)(256MiB/14753msec); 0 zone resets 00:16:06.508 slat (usec): min=3, max=1809, avg= 6.62, stdev= 9.71 00:16:06.508 clat (usec): min=387, max=117823, avg=10579.99, stdev=19206.99 00:16:06.508 lat (usec): min=399, max=117829, avg=10586.61, stdev=19207.11 00:16:06.508 clat percentiles (usec): 00:16:06.508 | 1.00th=[ 701], 5.00th=[ 889], 10.00th=[ 1004], 20.00th=[ 1205], 00:16:06.508 | 30.00th=[ 1844], 40.00th=[ 2769], 50.00th=[ 4047], 60.00th=[ 5145], 00:16:06.508 | 70.00th=[ 6128], 80.00th=[ 10814], 90.00th=[ 22414], 95.00th=[ 62653], 00:16:06.508 | 99.00th=[ 93848], 99.50th=[103285], 99.90th=[113771], 99.95th=[115868], 00:16:06.508 | 99.99th=[116917] 00:16:06.508 bw ( KiB/s): min= 224, max=40768, per=100.00%, avg=26214.40, stdev=12759.80, samples=20 00:16:06.508 iops : min= 56, max=10192, avg=6553.60, stdev=3189.95, samples=20 00:16:06.508 lat (usec) : 500=0.01%, 750=0.89%, 1000=4.14% 00:16:06.508 lat (msec) : 2=10.93%, 4=9.38%, 10=14.06%, 20=5.92%, 50=47.79% 00:16:06.508 lat (msec) : 100=5.45%, 250=1.36%, 500=0.06% 00:16:06.508 cpu : usr=99.18%, sys=0.18%, ctx=35, majf=0, minf=5541 00:16:06.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:06.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.508 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.508 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.508 00:16:06.508 Run status group 0 (all jobs): 00:16:06.508 READ: bw=21.1MiB/s (22.1MB/s), 10.6MiB/s-10.6MiB/s (11.1MB/s-11.2MB/s), io=509MiB (534MB), run=23910-24112msec 00:16:06.508 WRITE: bw=25.3MiB/s (26.5MB/s), 12.6MiB/s-17.4MiB/s (13.2MB/s-18.2MB/s), io=512MiB (537MB), run=14753-20267msec 00:16:07.082 ----------------------------------------------------- 00:16:07.082 Suppressions used: 00:16:07.082 count bytes template 00:16:07.082 2 10 /usr/src/fio/parse.c 00:16:07.082 2 192 /usr/src/fio/iolog.c 00:16:07.082 1 8 libtcmalloc_minimal.so 00:16:07.082 1 904 libcrypto.so 00:16:07.082 ----------------------------------------------------- 00:16:07.082 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.082 00:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.343 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:07.343 fio-3.35 00:16:07.343 Starting 1 thread 00:16:25.457 00:16:25.457 test: (groupid=0, jobs=1): err= 0: pid=72952: Tue Nov 19 00:00:31 2024 00:16:25.457 read: IOPS=7376, BW=28.8MiB/s (30.2MB/s)(255MiB/8839msec) 00:16:25.457 slat (nsec): min=2988, max=57712, avg=4753.25, stdev=1361.51 00:16:25.457 clat (usec): min=914, max=40052, avg=17343.31, stdev=2879.03 00:16:25.457 lat (usec): min=920, max=40060, avg=17348.07, stdev=2879.31 00:16:25.457 clat percentiles (usec): 00:16:25.457 | 1.00th=[14222], 5.00th=[15008], 10.00th=[15270], 20.00th=[15664], 00:16:25.457 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:16:25.457 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20841], 95.00th=[22938], 00:16:25.457 | 99.00th=[29230], 99.50th=[31327], 99.90th=[36439], 99.95th=[37487], 00:16:25.457 | 99.99th=[39060] 00:16:25.457 write: IOPS=8649, BW=33.8MiB/s (35.4MB/s)(256MiB/7577msec); 0 zone resets 00:16:25.457 slat (usec): min=4, max=1217, avg= 9.19, stdev= 9.54 00:16:25.457 clat (usec): min=509, max=87328, avg=14726.95, stdev=17675.01 00:16:25.457 lat (usec): min=521, max=87338, avg=14736.14, stdev=17675.20 00:16:25.457 clat percentiles (usec): 00:16:25.457 | 1.00th=[ 1205], 5.00th=[ 1582], 10.00th=[ 1811], 20.00th=[ 2180], 00:16:25.457 | 30.00th=[ 2573], 40.00th=[ 3621], 50.00th=[ 9110], 60.00th=[11338], 00:16:25.457 | 70.00th=[14353], 80.00th=[17695], 90.00th=[52167], 95.00th=[56361], 00:16:25.457 | 99.00th=[61080], 99.50th=[62653], 99.90th=[66847], 99.95th=[71828], 00:16:25.457 | 99.99th=[84411] 00:16:25.457 bw ( KiB/s): min= 3056, max=50336, per=94.70%, avg=32764.25, stdev=10307.03, samples=16 00:16:25.457 iops : min= 764, max=12584, avg=8191.06, stdev=2576.76, samples=16 00:16:25.457 lat (usec) : 750=0.02%, 1000=0.14% 00:16:25.457 lat (msec) : 2=7.44%, 4=12.88%, 10=6.57%, 20=57.71%, 50=9.45% 00:16:25.457 lat (msec) : 100=5.78% 00:16:25.457 cpu : 
usr=98.82%, sys=0.29%, ctx=27, majf=0, minf=5565 00:16:25.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:25.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.457 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.457 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.457 00:16:25.457 Run status group 0 (all jobs): 00:16:25.457 READ: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=255MiB (267MB), run=8839-8839msec 00:16:25.457 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=256MiB (268MB), run=7577-7577msec 00:16:26.403 ----------------------------------------------------- 00:16:26.403 Suppressions used: 00:16:26.403 count bytes template 00:16:26.403 1 5 /usr/src/fio/parse.c 00:16:26.403 2 192 /usr/src/fio/iolog.c 00:16:26.403 1 8 libtcmalloc_minimal.so 00:16:26.403 1 904 libcrypto.so 00:16:26.403 ----------------------------------------------------- 00:16:26.403 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:26.665 Remove shared memory files 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57173 /dev/shm/spdk_tgt_trace.pid71268 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:26.665 ************************************ 00:16:26.665 END TEST ftl_fio_basic 00:16:26.665 ************************************ 00:16:26.665 00:16:26.665 real 1m8.400s 00:16:26.665 user 2m30.991s 00:16:26.665 sys 0m3.087s 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.665 00:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:26.665 00:00:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:26.665 00:00:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:26.665 00:00:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.665 00:00:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:26.665 ************************************ 00:16:26.665 START TEST ftl_bdevperf 00:16:26.665 ************************************ 00:16:26.665 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:26.665 * Looking for test storage... 
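The START TEST / END TEST banners and the real/user/sys timing above come from the autotest run_test wrapper that launches bdevperf.sh here. A minimal sketch of that banner-and-timing pattern, as an illustrative re-creation rather than SPDK's exact autotest_common.sh code:

  run_test() {
      local name=$1; shift          # first argument is the test name used in the banners
      echo "START TEST $name"       # matches the "START TEST ftl_bdevperf" banner above
      time "$@"                     # produces the real/user/sys lines seen at END TEST
      local rc=$?
      echo "END TEST $name"
      return $rc
  }
  run_test ftl_bdevperf ./test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0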
00:16:26.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.665 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:26.665 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:26.665 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:26.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.927 --rc genhtml_branch_coverage=1 00:16:26.927 --rc genhtml_function_coverage=1 00:16:26.927 --rc genhtml_legend=1 00:16:26.927 --rc geninfo_all_blocks=1 00:16:26.927 --rc geninfo_unexecuted_blocks=1 00:16:26.927 00:16:26.927 ' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:26.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.927 --rc genhtml_branch_coverage=1 00:16:26.927 
--rc genhtml_function_coverage=1 00:16:26.927 --rc genhtml_legend=1 00:16:26.927 --rc geninfo_all_blocks=1 00:16:26.927 --rc geninfo_unexecuted_blocks=1 00:16:26.927 00:16:26.927 ' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:26.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.927 --rc genhtml_branch_coverage=1 00:16:26.927 --rc genhtml_function_coverage=1 00:16:26.927 --rc genhtml_legend=1 00:16:26.927 --rc geninfo_all_blocks=1 00:16:26.927 --rc geninfo_unexecuted_blocks=1 00:16:26.927 00:16:26.927 ' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:26.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.927 --rc genhtml_branch_coverage=1 00:16:26.927 --rc genhtml_function_coverage=1 00:16:26.927 --rc genhtml_legend=1 00:16:26.927 --rc geninfo_all_blocks=1 00:16:26.927 --rc geninfo_unexecuted_blocks=1 00:16:26.927 00:16:26.927 ' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:26.927 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73225 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73225 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73225 ']' 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.928 00:00:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:26.928 [2024-11-19 00:00:33.499367] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
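bdevperf is launched above with -z, which makes it wait for RPC configuration on /var/tmp/spdk.sock rather than starting a job immediately, and -T ftl0 points it at the FTL bdev that is about to be created. One plausible driving sequence for this mode, with paths as in this CI layout and bdevperf.py assumed at its usual location in the SPDK tree (the actual RPCs used here follow in the trace below):

  build/examples/bdevperf -z -T ftl0 &                       # park bdevperf, wait for RPCs
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # ...lvstore/lvol creation, NV-cache split, bdev_ftl_create (all traced below)...
  examples/bdev/bdevperf/bdevperf.py -t 240 perform_tests    # then start the queued workload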
00:16:26.928 [2024-11-19 00:00:33.500247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73225 ] 00:16:27.189 [2024-11-19 00:00:33.663199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.189 [2024-11-19 00:00:33.804778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:27.762 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:28.024 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:28.286 { 00:16:28.286 "name": "nvme0n1", 00:16:28.286 "aliases": [ 00:16:28.286 "04db1c26-075f-4a60-9b72-0670f07e0317" 00:16:28.286 ], 00:16:28.286 "product_name": "NVMe disk", 00:16:28.286 "block_size": 4096, 00:16:28.286 "num_blocks": 1310720, 00:16:28.286 "uuid": "04db1c26-075f-4a60-9b72-0670f07e0317", 00:16:28.286 "numa_id": -1, 00:16:28.286 "assigned_rate_limits": { 00:16:28.286 "rw_ios_per_sec": 0, 00:16:28.286 "rw_mbytes_per_sec": 0, 00:16:28.286 "r_mbytes_per_sec": 0, 00:16:28.286 "w_mbytes_per_sec": 0 00:16:28.286 }, 00:16:28.286 "claimed": true, 00:16:28.286 "claim_type": "read_many_write_one", 00:16:28.286 "zoned": false, 00:16:28.286 "supported_io_types": { 00:16:28.286 "read": true, 00:16:28.286 "write": true, 00:16:28.286 "unmap": true, 00:16:28.286 "flush": true, 00:16:28.286 "reset": true, 00:16:28.286 "nvme_admin": true, 00:16:28.286 "nvme_io": true, 00:16:28.286 "nvme_io_md": false, 00:16:28.286 "write_zeroes": true, 00:16:28.286 "zcopy": false, 00:16:28.286 "get_zone_info": false, 00:16:28.286 "zone_management": false, 00:16:28.286 "zone_append": false, 00:16:28.286 "compare": true, 00:16:28.286 "compare_and_write": false, 00:16:28.286 "abort": true, 00:16:28.286 "seek_hole": false, 00:16:28.286 "seek_data": false, 00:16:28.286 "copy": true, 00:16:28.286 "nvme_iov_md": false 00:16:28.286 }, 00:16:28.286 "driver_specific": { 00:16:28.286 
"nvme": [ 00:16:28.286 { 00:16:28.286 "pci_address": "0000:00:11.0", 00:16:28.286 "trid": { 00:16:28.286 "trtype": "PCIe", 00:16:28.286 "traddr": "0000:00:11.0" 00:16:28.286 }, 00:16:28.286 "ctrlr_data": { 00:16:28.286 "cntlid": 0, 00:16:28.286 "vendor_id": "0x1b36", 00:16:28.286 "model_number": "QEMU NVMe Ctrl", 00:16:28.286 "serial_number": "12341", 00:16:28.286 "firmware_revision": "8.0.0", 00:16:28.286 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:28.286 "oacs": { 00:16:28.286 "security": 0, 00:16:28.286 "format": 1, 00:16:28.286 "firmware": 0, 00:16:28.286 "ns_manage": 1 00:16:28.286 }, 00:16:28.286 "multi_ctrlr": false, 00:16:28.286 "ana_reporting": false 00:16:28.286 }, 00:16:28.286 "vs": { 00:16:28.286 "nvme_version": "1.4" 00:16:28.286 }, 00:16:28.286 "ns_data": { 00:16:28.286 "id": 1, 00:16:28.286 "can_share": false 00:16:28.286 } 00:16:28.286 } 00:16:28.286 ], 00:16:28.286 "mp_policy": "active_passive" 00:16:28.286 } 00:16:28.286 } 00:16:28.286 ]' 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:28.286 00:00:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:28.548 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e7be64bb-6b0f-4758-872d-cb2e1b5fca2b 00:16:28.548 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:28.548 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7be64bb-6b0f-4758-872d-cb2e1b5fca2b 00:16:28.809 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:29.070 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=94d5cb87-2707-49d5-b3c3-0045a9b91b2c 00:16:29.070 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 94d5cb87-2707-49d5-b3c3-0045a9b91b2c 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.331 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:29.332 00:00:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.332 00:00:35 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.332 00:00:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:29.332 00:00:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:29.332 00:00:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:29.332 00:00:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:29.593 { 00:16:29.593 "name": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:29.593 "aliases": [ 00:16:29.593 "lvs/nvme0n1p0" 00:16:29.593 ], 00:16:29.593 "product_name": "Logical Volume", 00:16:29.593 "block_size": 4096, 00:16:29.593 "num_blocks": 26476544, 00:16:29.593 "uuid": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:29.593 "assigned_rate_limits": { 00:16:29.593 "rw_ios_per_sec": 0, 00:16:29.593 "rw_mbytes_per_sec": 0, 00:16:29.593 "r_mbytes_per_sec": 0, 00:16:29.593 "w_mbytes_per_sec": 0 00:16:29.593 }, 00:16:29.593 "claimed": false, 00:16:29.593 "zoned": false, 00:16:29.593 "supported_io_types": { 00:16:29.593 "read": true, 00:16:29.593 "write": true, 00:16:29.593 "unmap": true, 00:16:29.593 "flush": false, 00:16:29.593 "reset": true, 00:16:29.593 "nvme_admin": false, 00:16:29.593 "nvme_io": false, 00:16:29.593 "nvme_io_md": false, 00:16:29.593 "write_zeroes": true, 00:16:29.593 "zcopy": false, 00:16:29.593 "get_zone_info": false, 00:16:29.593 "zone_management": false, 00:16:29.593 "zone_append": false, 00:16:29.593 "compare": false, 00:16:29.593 "compare_and_write": false, 00:16:29.593 "abort": false, 00:16:29.593 "seek_hole": true, 00:16:29.593 "seek_data": true, 00:16:29.593 "copy": false, 00:16:29.593 "nvme_iov_md": false 00:16:29.593 }, 00:16:29.593 "driver_specific": { 00:16:29.593 "lvol": { 00:16:29.593 "lvol_store_uuid": "94d5cb87-2707-49d5-b3c3-0045a9b91b2c", 00:16:29.593 "base_bdev": "nvme0n1", 00:16:29.593 "thin_provision": true, 00:16:29.593 "num_allocated_clusters": 0, 00:16:29.593 "snapshot": false, 00:16:29.593 "clone": false, 00:16:29.593 "esnap_clone": false 00:16:29.593 } 00:16:29.593 } 00:16:29.593 } 00:16:29.593 ]' 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:29.593 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:29.852 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:30.111 { 00:16:30.111 "name": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:30.111 "aliases": [ 00:16:30.111 "lvs/nvme0n1p0" 00:16:30.111 ], 00:16:30.111 "product_name": "Logical Volume", 00:16:30.111 "block_size": 4096, 00:16:30.111 "num_blocks": 26476544, 00:16:30.111 "uuid": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:30.111 "assigned_rate_limits": { 00:16:30.111 "rw_ios_per_sec": 0, 00:16:30.111 "rw_mbytes_per_sec": 0, 00:16:30.111 "r_mbytes_per_sec": 0, 00:16:30.111 "w_mbytes_per_sec": 0 00:16:30.111 }, 00:16:30.111 "claimed": false, 00:16:30.111 "zoned": false, 00:16:30.111 "supported_io_types": { 00:16:30.111 "read": true, 00:16:30.111 "write": true, 00:16:30.111 "unmap": true, 00:16:30.111 "flush": false, 00:16:30.111 "reset": true, 00:16:30.111 "nvme_admin": false, 00:16:30.111 "nvme_io": false, 00:16:30.111 "nvme_io_md": false, 00:16:30.111 "write_zeroes": true, 00:16:30.111 "zcopy": false, 00:16:30.111 "get_zone_info": false, 00:16:30.111 "zone_management": false, 00:16:30.111 "zone_append": false, 00:16:30.111 "compare": false, 00:16:30.111 "compare_and_write": false, 00:16:30.111 "abort": false, 00:16:30.111 "seek_hole": true, 00:16:30.111 "seek_data": true, 00:16:30.111 "copy": false, 00:16:30.111 "nvme_iov_md": false 00:16:30.111 }, 00:16:30.111 "driver_specific": { 00:16:30.111 "lvol": { 00:16:30.111 "lvol_store_uuid": "94d5cb87-2707-49d5-b3c3-0045a9b91b2c", 00:16:30.111 "base_bdev": "nvme0n1", 00:16:30.111 "thin_provision": true, 00:16:30.111 "num_allocated_clusters": 0, 00:16:30.111 "snapshot": false, 00:16:30.111 "clone": false, 00:16:30.111 "esnap_clone": false 00:16:30.111 } 00:16:30.111 } 00:16:30.111 } 00:16:30.111 ]' 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:30.111 00:00:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:30.369 00:00:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4cb686f2-e045-4fda-91dd-eb3c1a25f826 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:30.628 { 00:16:30.628 "name": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:30.628 "aliases": [ 00:16:30.628 "lvs/nvme0n1p0" 00:16:30.628 ], 00:16:30.628 "product_name": "Logical Volume", 00:16:30.628 "block_size": 4096, 00:16:30.628 "num_blocks": 26476544, 00:16:30.628 "uuid": "4cb686f2-e045-4fda-91dd-eb3c1a25f826", 00:16:30.628 "assigned_rate_limits": { 00:16:30.628 "rw_ios_per_sec": 0, 00:16:30.628 "rw_mbytes_per_sec": 0, 00:16:30.628 "r_mbytes_per_sec": 0, 00:16:30.628 "w_mbytes_per_sec": 0 00:16:30.628 }, 00:16:30.628 "claimed": false, 00:16:30.628 "zoned": false, 00:16:30.628 "supported_io_types": { 00:16:30.628 "read": true, 00:16:30.628 "write": true, 00:16:30.628 "unmap": true, 00:16:30.628 "flush": false, 00:16:30.628 "reset": true, 00:16:30.628 "nvme_admin": false, 00:16:30.628 "nvme_io": false, 00:16:30.628 "nvme_io_md": false, 00:16:30.628 "write_zeroes": true, 00:16:30.628 "zcopy": false, 00:16:30.628 "get_zone_info": false, 00:16:30.628 "zone_management": false, 00:16:30.628 "zone_append": false, 00:16:30.628 "compare": false, 00:16:30.628 "compare_and_write": false, 00:16:30.628 "abort": false, 00:16:30.628 "seek_hole": true, 00:16:30.628 "seek_data": true, 00:16:30.628 "copy": false, 00:16:30.628 "nvme_iov_md": false 00:16:30.628 }, 00:16:30.628 "driver_specific": { 00:16:30.628 "lvol": { 00:16:30.628 "lvol_store_uuid": "94d5cb87-2707-49d5-b3c3-0045a9b91b2c", 00:16:30.628 "base_bdev": "nvme0n1", 00:16:30.628 "thin_provision": true, 00:16:30.628 "num_allocated_clusters": 0, 00:16:30.628 "snapshot": false, 00:16:30.628 "clone": false, 00:16:30.628 "esnap_clone": false 00:16:30.628 } 00:16:30.628 } 00:16:30.628 } 00:16:30.628 ]' 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:30.628 00:00:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4cb686f2-e045-4fda-91dd-eb3c1a25f826 -c nvc0n1p0 --l2p_dram_limit 20 00:16:30.888 [2024-11-19 00:00:37.336769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.888 [2024-11-19 00:00:37.336916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:30.888 [2024-11-19 00:00:37.336933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.888 [2024-11-19 00:00:37.336943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.888 [2024-11-19 00:00:37.336985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.888 [2024-11-19 00:00:37.336996] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:30.888 [2024-11-19 00:00:37.337003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:30.888 [2024-11-19 00:00:37.337010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.888 [2024-11-19 00:00:37.337024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:30.888 [2024-11-19 00:00:37.337599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:30.888 [2024-11-19 00:00:37.337626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.888 [2024-11-19 00:00:37.337646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:30.888 [2024-11-19 00:00:37.337656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:16:30.888 [2024-11-19 00:00:37.337672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.888 [2024-11-19 00:00:37.337711] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4dd6fa88-b4c6-4614-84f8-aad0d21f609b 00:16:30.888 [2024-11-19 00:00:37.339844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.339883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:30.889 [2024-11-19 00:00:37.339895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:30.889 [2024-11-19 00:00:37.339904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.346907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.346998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:30.889 [2024-11-19 00:00:37.347042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:16:30.889 [2024-11-19 00:00:37.347060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.347152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.347172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:30.889 [2024-11-19 00:00:37.347193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:30.889 [2024-11-19 00:00:37.347208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.347263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.347283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:30.889 [2024-11-19 00:00:37.347303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:30.889 [2024-11-19 00:00:37.347372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.347395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:30.889 [2024-11-19 00:00:37.350652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.350739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:30.889 [2024-11-19 00:00:37.350750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:16:30.889 [2024-11-19 00:00:37.350760] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.350787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.350796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:30.889 [2024-11-19 00:00:37.350802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:30.889 [2024-11-19 00:00:37.350809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.350820] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:30.889 [2024-11-19 00:00:37.350934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:30.889 [2024-11-19 00:00:37.350945] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:30.889 [2024-11-19 00:00:37.350956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:30.889 [2024-11-19 00:00:37.350964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:30.889 [2024-11-19 00:00:37.350973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:30.889 [2024-11-19 00:00:37.350979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:30.889 [2024-11-19 00:00:37.350988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:30.889 [2024-11-19 00:00:37.350994] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:30.889 [2024-11-19 00:00:37.351001] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:30.889 [2024-11-19 00:00:37.351007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.351016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:30.889 [2024-11-19 00:00:37.351023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:16:30.889 [2024-11-19 00:00:37.351030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.351092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.889 [2024-11-19 00:00:37.351101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:30.889 [2024-11-19 00:00:37.351107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:30.889 [2024-11-19 00:00:37.351116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.889 [2024-11-19 00:00:37.351212] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:30.889 [2024-11-19 00:00:37.351224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:30.889 [2024-11-19 00:00:37.351234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:30.889 [2024-11-19 00:00:37.351242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:30.889 [2024-11-19 00:00:37.351255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:30.889 
[2024-11-19 00:00:37.351268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:30.889 [2024-11-19 00:00:37.351273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:30.889 [2024-11-19 00:00:37.351285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:30.889 [2024-11-19 00:00:37.351292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:30.889 [2024-11-19 00:00:37.351298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:30.889 [2024-11-19 00:00:37.351310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:30.889 [2024-11-19 00:00:37.351315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:30.889 [2024-11-19 00:00:37.351324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:30.889 [2024-11-19 00:00:37.351336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:30.889 [2024-11-19 00:00:37.351342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:30.889 [2024-11-19 00:00:37.351355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:30.889 [2024-11-19 00:00:37.351368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:30.889 [2024-11-19 00:00:37.351374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:30.889 [2024-11-19 00:00:37.351388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:30.889 [2024-11-19 00:00:37.351394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:30.889 [2024-11-19 00:00:37.351406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:30.889 [2024-11-19 00:00:37.351413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:30.889 [2024-11-19 00:00:37.351418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:30.890 [2024-11-19 00:00:37.351427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:30.890 [2024-11-19 00:00:37.351450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:30.890 [2024-11-19 00:00:37.351457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:30.890 [2024-11-19 00:00:37.351463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:30.890 [2024-11-19 00:00:37.351470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:30.890 [2024-11-19 00:00:37.351475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:30.890 [2024-11-19 00:00:37.351482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:30.890 [2024-11-19 00:00:37.351487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:30.890 [2024-11-19 00:00:37.351494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.890 [2024-11-19 00:00:37.351499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:30.890 [2024-11-19 00:00:37.351506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:30.890 [2024-11-19 00:00:37.351510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.890 [2024-11-19 00:00:37.351517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:30.890 [2024-11-19 00:00:37.351523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:30.890 [2024-11-19 00:00:37.351531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:30.890 [2024-11-19 00:00:37.351537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.890 [2024-11-19 00:00:37.351548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:30.890 [2024-11-19 00:00:37.351553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:30.890 [2024-11-19 00:00:37.351560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:30.890 [2024-11-19 00:00:37.351566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:30.890 [2024-11-19 00:00:37.351573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:30.890 [2024-11-19 00:00:37.351578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:30.890 [2024-11-19 00:00:37.351588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:30.890 [2024-11-19 00:00:37.351595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:30.890 [2024-11-19 00:00:37.351614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:30.890 [2024-11-19 00:00:37.351621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:30.890 [2024-11-19 00:00:37.351626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:30.890 [2024-11-19 00:00:37.351634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:30.890 [2024-11-19 00:00:37.351639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:30.890 [2024-11-19 00:00:37.351646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:30.890 [2024-11-19 00:00:37.351651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:30.890 [2024-11-19 00:00:37.351660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:30.890 [2024-11-19 00:00:37.351666] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:30.890 [2024-11-19 00:00:37.351699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:30.890 [2024-11-19 00:00:37.351706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:30.890 [2024-11-19 00:00:37.351720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:30.890 [2024-11-19 00:00:37.351728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:30.890 [2024-11-19 00:00:37.351733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:30.890 [2024-11-19 00:00:37.351741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.890 [2024-11-19 00:00:37.351749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:30.890 [2024-11-19 00:00:37.351756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:16:30.890 [2024-11-19 00:00:37.351762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.890 [2024-11-19 00:00:37.351790] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
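The layout dump above is worth a quick decode: region sizes are reported in 4 KiB blocks, so the l2p region (type 0x2, blk_sz 0x5000) is exactly the 80.00 MiB listed in the NV cache layout table, which in turn equals the 20971520 L2P entries at 4 bytes apiece, and the 103424.00 MiB base capacity follows from the lvol geometry printed earlier (26476544 blocks x 4096 B). A spot-check in plain shell arithmetic (annotation only, not part of the captured run):

  printf '%d MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))      # l2p region -> 80 MiB
  printf '%d MiB\n' $(( 20971520 * 4 / 1024 / 1024 ))       # L2P table  -> 80 MiB
  printf '%d MiB\n' $(( 26476544 * 4096 / 1024 / 1024 ))    # base bdev  -> 103424 MiB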
00:16:30.890 [2024-11-19 00:00:37.351798] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:37.472 [2024-11-19 00:00:44.016246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.016305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:37.472 [2024-11-19 00:00:44.016324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6664.428 ms 00:16:37.472 [2024-11-19 00:00:44.016332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.039927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.039970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.472 [2024-11-19 00:00:44.039983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.415 ms 00:16:37.472 [2024-11-19 00:00:44.039991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.040096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.040104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:37.472 [2024-11-19 00:00:44.040115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:37.472 [2024-11-19 00:00:44.040136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.082429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.082600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.472 [2024-11-19 00:00:44.082623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.253 ms 00:16:37.472 [2024-11-19 00:00:44.082630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.082663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.082673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.472 [2024-11-19 00:00:44.082681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.472 [2024-11-19 00:00:44.082688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.083120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.083154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.472 [2024-11-19 00:00:44.083164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:16:37.472 [2024-11-19 00:00:44.083171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.083263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.083271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.472 [2024-11-19 00:00:44.083283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:37.472 [2024-11-19 00:00:44.083290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.095507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.472 [2024-11-19 00:00:44.095535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.472 [2024-11-19 
00:00:44.095546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:16:37.472 [2024-11-19 00:00:44.095554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.472 [2024-11-19 00:00:44.105460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:37.472 [2024-11-19 00:00:44.111017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.473 [2024-11-19 00:00:44.111045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.473 [2024-11-19 00:00:44.111055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.404 ms 00:16:37.473 [2024-11-19 00:00:44.111063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.191873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.731 [2024-11-19 00:00:44.191907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:37.731 [2024-11-19 00:00:44.191917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.787 ms 00:16:37.731 [2024-11-19 00:00:44.191925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.192071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.731 [2024-11-19 00:00:44.192084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:37.731 [2024-11-19 00:00:44.192092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:37.731 [2024-11-19 00:00:44.192100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.210953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.731 [2024-11-19 00:00:44.210985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:37.731 [2024-11-19 00:00:44.210994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.807 ms 00:16:37.731 [2024-11-19 00:00:44.211003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.229165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.731 [2024-11-19 00:00:44.229193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:37.731 [2024-11-19 00:00:44.229203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.135 ms 00:16:37.731 [2024-11-19 00:00:44.229210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.229654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.731 [2024-11-19 00:00:44.229665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:37.731 [2024-11-19 00:00:44.229672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:16:37.731 [2024-11-19 00:00:44.229680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.731 [2024-11-19 00:00:44.300545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.300580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:37.732 [2024-11-19 00:00:44.300589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.833 ms 00:16:37.732 [2024-11-19 00:00:44.300598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 
00:00:44.320629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.320659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:37.732 [2024-11-19 00:00:44.320669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.981 ms 00:16:37.732 [2024-11-19 00:00:44.320679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 00:00:44.338847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.338972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:37.732 [2024-11-19 00:00:44.338986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.141 ms 00:16:37.732 [2024-11-19 00:00:44.338994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 00:00:44.357961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.358076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:37.732 [2024-11-19 00:00:44.358090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.944 ms 00:16:37.732 [2024-11-19 00:00:44.358098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 00:00:44.358138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.358150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:37.732 [2024-11-19 00:00:44.358157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:37.732 [2024-11-19 00:00:44.358165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 00:00:44.358229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.732 [2024-11-19 00:00:44.358239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:37.732 [2024-11-19 00:00:44.358246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:37.732 [2024-11-19 00:00:44.358255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.732 [2024-11-19 00:00:44.359083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 7021.920 ms, result 0 00:16:37.732 { 00:16:37.732 "name": "ftl0", 00:16:37.732 "uuid": "4dd6fa88-b4c6-4614-84f8-aad0d21f609b" 00:16:37.732 } 00:16:37.732 00:00:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:37.732 00:00:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:37.732 00:00:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:37.990 00:00:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:37.990 [2024-11-19 00:00:44.679248] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:38.249 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:38.249 Zero copy mechanism will not be used. 00:16:38.249 Running I/O for 4 seconds... 
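The zero-copy notice above is expected at this I/O size: 69632 B is 17 blocks of 4096 B (68 KiB), just over bdevperf's 65536 B (64 KiB) zero-copy threshold, so the run falls back to copying buffers. Sketch of the arithmetic:

  echo $(( 69632 / 4096 )) $(( 69632 > 65536 ))   # -> '17 1': 17 blocks, over the 64 KiB threshold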
00:16:40.117 759.00 IOPS, 50.40 MiB/s [2024-11-19T00:00:47.743Z] 757.00 IOPS, 50.27 MiB/s [2024-11-19T00:00:49.116Z] 758.00 IOPS, 50.34 MiB/s [2024-11-19T00:00:49.116Z] 766.00 IOPS, 50.87 MiB/s 00:16:42.424 Latency(us) 00:16:42.424 [2024-11-19T00:00:49.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.424 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:42.424 ftl0 : 4.00 766.00 50.87 0.00 0.00 1387.07 441.11 2558.42 00:16:42.424 [2024-11-19T00:00:49.116Z] =================================================================================================================== 00:16:42.424 [2024-11-19T00:00:49.116Z] Total : 766.00 50.87 0.00 0.00 1387.07 441.11 2558.42 00:16:42.425 [2024-11-19 00:00:48.686783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:42.425 { 00:16:42.425 "results": [ 00:16:42.425 { 00:16:42.425 "job": "ftl0", 00:16:42.425 "core_mask": "0x1", 00:16:42.425 "workload": "randwrite", 00:16:42.425 "status": "finished", 00:16:42.425 "queue_depth": 1, 00:16:42.425 "io_size": 69632, 00:16:42.425 "runtime": 4.001327, 00:16:42.425 "iops": 765.9958808665225, 00:16:42.425 "mibps": 50.86691396379251, 00:16:42.425 "io_failed": 0, 00:16:42.425 "io_timeout": 0, 00:16:42.425 "avg_latency_us": 1387.0662738110177, 00:16:42.425 "min_latency_us": 441.10769230769233, 00:16:42.425 "max_latency_us": 2558.424615384615 00:16:42.425 } 00:16:42.425 ], 00:16:42.425 "core_count": 1 00:16:42.425 } 00:16:42.425 00:00:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:42.425 [2024-11-19 00:00:48.799772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:42.425 Running I/O for 4 seconds... 
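The qd=1 results above are internally consistent: MiB/s is just IOPS x io_size / 2^20, and the reported IOPS at 69632 B reproduces the 50.87 MiB/s column. A one-line spot-check (annotation, not from the run):

  awk 'BEGIN { printf "%.2f MiB/s\n", 765.9958808665225 * 69632 / 1048576 }'   # -> 50.87 MiB/s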
00:16:44.339 7202.00 IOPS, 28.13 MiB/s [2024-11-19T00:00:52.036Z] 6016.00 IOPS, 23.50 MiB/s [2024-11-19T00:00:52.980Z] 5733.00 IOPS, 22.39 MiB/s [2024-11-19T00:00:52.980Z] 5530.75 IOPS, 21.60 MiB/s 00:16:46.288 Latency(us) 00:16:46.288 [2024-11-19T00:00:52.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.288 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.288 ftl0 : 4.03 5522.38 21.57 0.00 0.00 23094.84 362.34 45169.43 00:16:46.288 [2024-11-19T00:00:52.980Z] =================================================================================================================== 00:16:46.288 [2024-11-19T00:00:52.980Z] Total : 5522.38 21.57 0.00 0.00 23094.84 0.00 45169.43 00:16:46.288 [2024-11-19 00:00:52.835245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:46.288 { 00:16:46.288 "results": [ 00:16:46.288 { 00:16:46.288 "job": "ftl0", 00:16:46.288 "core_mask": "0x1", 00:16:46.288 "workload": "randwrite", 00:16:46.288 "status": "finished", 00:16:46.288 "queue_depth": 128, 00:16:46.288 "io_size": 4096, 00:16:46.288 "runtime": 4.027613, 00:16:46.288 "iops": 5522.377646511718, 00:16:46.288 "mibps": 21.5717876816864, 00:16:46.288 "io_failed": 0, 00:16:46.288 "io_timeout": 0, 00:16:46.288 "avg_latency_us": 23094.83662260588, 00:16:46.288 "min_latency_us": 362.33846153846156, 00:16:46.288 "max_latency_us": 45169.42769230769 00:16:46.288 } 00:16:46.288 ], 00:16:46.288 "core_count": 1 00:16:46.288 } 00:16:46.288 00:00:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:46.288 [2024-11-19 00:00:52.946073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:46.288 Running I/O for 4 seconds... 
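The same check holds for the qd=128 randwrite run above (5522.38 IOPS x 4096 B comes to the reported 21.57 MiB/s). For reference, these workloads are RPC-driven against an already-running bdevperf instance; a minimal sketch of issuing them by hand, assuming bdevperf was started with -z so it waits for perform_tests over RPC (BPERF is just a local shorthand):

  awk 'BEGIN { printf "%.2f MiB/s\n", 5522.38 * 4096 / 1048576 }'   # -> 21.57 MiB/s
  BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BPERF perform_tests -q 128 -w randwrite -t 4 -o 4096
  $BPERF perform_tests -q 128 -w verify    -t 4 -o 4096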
00:16:48.616 5227.00 IOPS, 20.42 MiB/s [2024-11-19T00:00:56.254Z] 4818.50 IOPS, 18.82 MiB/s [2024-11-19T00:00:57.199Z] 4681.33 IOPS, 18.29 MiB/s [2024-11-19T00:00:57.199Z] 4625.25 IOPS, 18.07 MiB/s 00:16:50.507 Latency(us) 00:16:50.507 [2024-11-19T00:00:57.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.507 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:50.507 Verification LBA range: start 0x0 length 0x1400000 00:16:50.507 ftl0 : 4.02 4637.34 18.11 0.00 0.00 27517.15 374.94 39724.90 00:16:50.507 [2024-11-19T00:00:57.199Z] =================================================================================================================== 00:16:50.507 [2024-11-19T00:00:57.199Z] Total : 4637.34 18.11 0.00 0.00 27517.15 0.00 39724.90 00:16:50.507 { 00:16:50.507 "results": [ 00:16:50.507 { 00:16:50.507 "job": "ftl0", 00:16:50.507 "core_mask": "0x1", 00:16:50.507 "workload": "verify", 00:16:50.507 "status": "finished", 00:16:50.507 "verify_range": { 00:16:50.507 "start": 0, 00:16:50.507 "length": 20971520 00:16:50.507 }, 00:16:50.507 "queue_depth": 128, 00:16:50.507 "io_size": 4096, 00:16:50.507 "runtime": 4.015237, 00:16:50.507 "iops": 4637.335230772181, 00:16:50.507 "mibps": 18.11459074520383, 00:16:50.507 "io_failed": 0, 00:16:50.507 "io_timeout": 0, 00:16:50.507 "avg_latency_us": 27517.151574320415, 00:16:50.507 "min_latency_us": 374.94153846153847, 00:16:50.507 "max_latency_us": 39724.89846153846 00:16:50.507 } 00:16:50.507 ], 00:16:50.507 "core_count": 1 00:16:50.507 } 00:16:50.507 [2024-11-19 00:00:56.978338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:50.507 00:00:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:50.507 [2024-11-19 00:00:57.186404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.507 [2024-11-19 00:00:57.186470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:50.507 [2024-11-19 00:00:57.186487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:50.507 [2024-11-19 00:00:57.186500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.507 [2024-11-19 00:00:57.186524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:50.507 [2024-11-19 00:00:57.190043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.768 [2024-11-19 00:00:57.190285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:50.768 [2024-11-19 00:00:57.190324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:16:50.768 [2024-11-19 00:00:57.190334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.768 [2024-11-19 00:00:57.193411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.768 [2024-11-19 00:00:57.193593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:50.768 [2024-11-19 00:00:57.193643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:16:50.768 [2024-11-19 00:00:57.193653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.768 [2024-11-19 00:00:57.408232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.768 [2024-11-19 00:00:57.408293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:50.768 [2024-11-19 00:00:57.408318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.540 ms 00:16:50.768 [2024-11-19 00:00:57.408330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.768 [2024-11-19 00:00:57.414506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.768 [2024-11-19 00:00:57.414551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:50.768 [2024-11-19 00:00:57.414567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:16:50.768 [2024-11-19 00:00:57.414576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.768 [2024-11-19 00:00:57.440893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.768 [2024-11-19 00:00:57.440944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:50.768 [2024-11-19 00:00:57.440960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.233 ms 00:16:50.768 [2024-11-19 00:00:57.440969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.460719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.460924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:51.031 [2024-11-19 00:00:57.460958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.694 ms 00:16:51.031 [2024-11-19 00:00:57.460968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.461156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.461172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:51.031 [2024-11-19 00:00:57.461189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:51.031 [2024-11-19 00:00:57.461197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.487766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.487961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:51.031 [2024-11-19 00:00:57.487987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.546 ms 00:16:51.031 [2024-11-19 00:00:57.487996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.513634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.513681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:51.031 [2024-11-19 00:00:57.513696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.517 ms 00:16:51.031 [2024-11-19 00:00:57.513705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.539232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.539277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:51.031 [2024-11-19 00:00:57.539293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.473 ms 00:16:51.031 [2024-11-19 00:00:57.539301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.564252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.031 [2024-11-19 00:00:57.564300] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:51.031 [2024-11-19 00:00:57.564319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.840 ms 00:16:51.031 [2024-11-19 00:00:57.564326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.031 [2024-11-19 00:00:57.564376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:51.031 [2024-11-19 00:00:57.564395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:51.031 [2024-11-19 00:00:57.564608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:51.031 [2024-11-19 00:00:57.564678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.564997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565424] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:51.032 [2024-11-19 00:00:57.565470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:51.032 [2024-11-19 00:00:57.565482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dd6fa88-b4c6-4614-84f8-aad0d21f609b 00:16:51.032 [2024-11-19 00:00:57.565491] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:51.032 [2024-11-19 00:00:57.565502] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:51.032 [2024-11-19 00:00:57.565513] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:51.032 [2024-11-19 00:00:57.565524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:51.033 [2024-11-19 00:00:57.565546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:51.033 [2024-11-19 00:00:57.565556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:51.033 [2024-11-19 00:00:57.565564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:51.033 [2024-11-19 00:00:57.565575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:51.033 [2024-11-19 00:00:57.565582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:51.033 [2024-11-19 00:00:57.565592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.033 [2024-11-19 00:00:57.565602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:51.033 [2024-11-19 00:00:57.565614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:16:51.033 [2024-11-19 00:00:57.565621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.580058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.033 [2024-11-19 00:00:57.580104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:51.033 [2024-11-19 00:00:57.580119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.395 ms 00:16:51.033 [2024-11-19 00:00:57.580152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.580611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.033 [2024-11-19 00:00:57.580632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:51.033 [2024-11-19 00:00:57.580645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:16:51.033 [2024-11-19 00:00:57.580655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.623261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.033 [2024-11-19 00:00:57.623519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.033 [2024-11-19 00:00:57.623551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.033 [2024-11-19 00:00:57.623560] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.623643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.033 [2024-11-19 00:00:57.623653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.033 [2024-11-19 00:00:57.623666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.033 [2024-11-19 00:00:57.623674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.623769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.033 [2024-11-19 00:00:57.623785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.033 [2024-11-19 00:00:57.623797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.033 [2024-11-19 00:00:57.623806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.623827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.033 [2024-11-19 00:00:57.623836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.033 [2024-11-19 00:00:57.623850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.033 [2024-11-19 00:00:57.623861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.033 [2024-11-19 00:00:57.715921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.033 [2024-11-19 00:00:57.716000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.033 [2024-11-19 00:00:57.716022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.033 [2024-11-19 00:00:57.716031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.296 [2024-11-19 00:00:57.792289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.792300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.296 [2024-11-19 00:00:57.792478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.792487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.296 [2024-11-19 00:00:57.792564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.792572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.296 [2024-11-19 00:00:57.792718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:51.296 [2024-11-19 00:00:57.792726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:51.296 [2024-11-19 00:00:57.792789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.792798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.296 [2024-11-19 00:00:57.792892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.792903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.792971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.296 [2024-11-19 00:00:57.792993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.296 [2024-11-19 00:00:57.793007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.296 [2024-11-19 00:00:57.793015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.296 [2024-11-19 00:00:57.793233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 606.729 ms, result 0 00:16:51.296 true 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73225 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73225 ']' 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73225 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73225 00:16:51.296 killing process with pid 73225 00:16:51.296 Received shutdown signal, test time was about 4.000000 seconds 00:16:51.296 00:16:51.296 Latency(us) 00:16:51.296 [2024-11-19T00:00:57.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.296 [2024-11-19T00:00:57.988Z] =================================================================================================================== 00:16:51.296 [2024-11-19T00:00:57.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73225' 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73225 00:16:51.296 00:00:57 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73225 00:16:55.505 Remove shared memory files 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:55.505 00:01:02 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:55.505 ************************************ 00:16:55.505 END TEST ftl_bdevperf 00:16:55.505 ************************************ 00:16:55.505 00:16:55.505 real 0m28.780s 00:16:55.505 user 0m31.288s 00:16:55.505 sys 0m1.058s 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.505 00:01:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:55.505 00:01:02 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:55.505 00:01:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:55.505 00:01:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.505 00:01:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:55.505 ************************************ 00:16:55.505 START TEST ftl_trim 00:16:55.505 ************************************ 00:16:55.505 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:55.505 * Looking for test storage... 00:16:55.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:55.505 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.505 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.506 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.767 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.767 00:01:02 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:55.767 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.767 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.767 --rc genhtml_branch_coverage=1 00:16:55.767 --rc genhtml_function_coverage=1 00:16:55.767 --rc genhtml_legend=1 00:16:55.767 --rc geninfo_all_blocks=1 00:16:55.767 --rc geninfo_unexecuted_blocks=1 00:16:55.767 00:16:55.768 ' 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.768 --rc genhtml_branch_coverage=1 00:16:55.768 --rc genhtml_function_coverage=1 00:16:55.768 --rc genhtml_legend=1 00:16:55.768 --rc geninfo_all_blocks=1 00:16:55.768 --rc geninfo_unexecuted_blocks=1 00:16:55.768 00:16:55.768 ' 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.768 --rc genhtml_branch_coverage=1 00:16:55.768 --rc genhtml_function_coverage=1 00:16:55.768 --rc genhtml_legend=1 00:16:55.768 --rc geninfo_all_blocks=1 00:16:55.768 --rc geninfo_unexecuted_blocks=1 00:16:55.768 00:16:55.768 ' 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.768 --rc genhtml_branch_coverage=1 00:16:55.768 --rc genhtml_function_coverage=1 00:16:55.768 --rc genhtml_legend=1 00:16:55.768 --rc geninfo_all_blocks=1 00:16:55.768 --rc geninfo_unexecuted_blocks=1 00:16:55.768 00:16:55.768 ' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
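
The xtrace above is scripts/common.sh answering `lt 1.15 2`: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:), then walks the fields left to right as decimal numbers until one side wins, which is how the harness decides the installed lcov 1.15 predates 2.x and exported the legacy '--rc lcov_*' coverage options captured above. A rough standalone sketch of that field-wise compare, assuming bash; the `version_lt` name is hypothetical, the real helpers are `lt` and `cmp_versions` in scripts/common.sh:

# version_lt A B: succeed (exit 0) when version A sorts strictly before B,
# mirroring the field-wise decimal compare traced above.
version_lt() {
  local IFS='.-:'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # Missing fields count as 0, so "2" is compared as "2.0.0" if needed.
    local f1=$(( 10#${ver1[v]:-0} )) f2=$(( 10#${ver2[v]:-0} ))
    (( f1 < f2 )) && return 0
    (( f1 > f2 )) && return 1
  done
  return 1   # equal versions are not strictly less
}

version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"

Comparing per field avoids the lexicographic trap where "1.15" would sort before "1.2"; field-wise, 15 > 2, so 1.15 correctly ranks above 1.2.
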
00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.768 00:01:02 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73624 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73624 00:16:55.768 00:01:02 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73624 ']' 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.768 00:01:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:55.768 [2024-11-19 00:01:02.392548] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:55.768 [2024-11-19 00:01:02.392925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73624 ] 00:16:56.029 [2024-11-19 00:01:02.563523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.029 [2024-11-19 00:01:02.714069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.029 [2024-11-19 00:01:02.714372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.029 [2024-11-19 00:01:02.714468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.971 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.971 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:56.971 00:01:03 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:57.232 00:01:03 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:57.232 00:01:03 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:57.232 00:01:03 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:57.232 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:57.232 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:57.232 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:57.232 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:57.232 00:01:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:57.493 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:57.493 { 00:16:57.493 "name": "nvme0n1", 00:16:57.493 "aliases": [ 
00:16:57.493 "86103db2-08b4-4679-85f1-b337c5bcd422" 00:16:57.493 ], 00:16:57.493 "product_name": "NVMe disk", 00:16:57.493 "block_size": 4096, 00:16:57.493 "num_blocks": 1310720, 00:16:57.493 "uuid": "86103db2-08b4-4679-85f1-b337c5bcd422", 00:16:57.493 "numa_id": -1, 00:16:57.493 "assigned_rate_limits": { 00:16:57.493 "rw_ios_per_sec": 0, 00:16:57.493 "rw_mbytes_per_sec": 0, 00:16:57.493 "r_mbytes_per_sec": 0, 00:16:57.493 "w_mbytes_per_sec": 0 00:16:57.493 }, 00:16:57.493 "claimed": true, 00:16:57.493 "claim_type": "read_many_write_one", 00:16:57.493 "zoned": false, 00:16:57.493 "supported_io_types": { 00:16:57.493 "read": true, 00:16:57.493 "write": true, 00:16:57.493 "unmap": true, 00:16:57.493 "flush": true, 00:16:57.493 "reset": true, 00:16:57.493 "nvme_admin": true, 00:16:57.493 "nvme_io": true, 00:16:57.493 "nvme_io_md": false, 00:16:57.493 "write_zeroes": true, 00:16:57.493 "zcopy": false, 00:16:57.493 "get_zone_info": false, 00:16:57.493 "zone_management": false, 00:16:57.493 "zone_append": false, 00:16:57.493 "compare": true, 00:16:57.493 "compare_and_write": false, 00:16:57.493 "abort": true, 00:16:57.493 "seek_hole": false, 00:16:57.493 "seek_data": false, 00:16:57.493 "copy": true, 00:16:57.493 "nvme_iov_md": false 00:16:57.493 }, 00:16:57.493 "driver_specific": { 00:16:57.493 "nvme": [ 00:16:57.493 { 00:16:57.493 "pci_address": "0000:00:11.0", 00:16:57.493 "trid": { 00:16:57.493 "trtype": "PCIe", 00:16:57.493 "traddr": "0000:00:11.0" 00:16:57.493 }, 00:16:57.493 "ctrlr_data": { 00:16:57.493 "cntlid": 0, 00:16:57.493 "vendor_id": "0x1b36", 00:16:57.493 "model_number": "QEMU NVMe Ctrl", 00:16:57.493 "serial_number": "12341", 00:16:57.493 "firmware_revision": "8.0.0", 00:16:57.493 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:57.493 "oacs": { 00:16:57.493 "security": 0, 00:16:57.493 "format": 1, 00:16:57.493 "firmware": 0, 00:16:57.493 "ns_manage": 1 00:16:57.493 }, 00:16:57.493 "multi_ctrlr": false, 00:16:57.493 "ana_reporting": false 00:16:57.493 }, 00:16:57.493 "vs": { 00:16:57.493 "nvme_version": "1.4" 00:16:57.493 }, 00:16:57.493 "ns_data": { 00:16:57.493 "id": 1, 00:16:57.493 "can_share": false 00:16:57.493 } 00:16:57.493 } 00:16:57.493 ], 00:16:57.493 "mp_policy": "active_passive" 00:16:57.493 } 00:16:57.493 } 00:16:57.493 ]' 00:16:57.493 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:57.493 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:57.493 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:57.493 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:57.494 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:57.494 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:57.494 00:01:04 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:57.494 00:01:04 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:57.494 00:01:04 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:57.494 00:01:04 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:57.494 00:01:04 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:57.755 00:01:04 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=94d5cb87-2707-49d5-b3c3-0045a9b91b2c 00:16:57.755 00:01:04 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:57.755 00:01:04 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 94d5cb87-2707-49d5-b3c3-0045a9b91b2c 00:16:58.017 00:01:04 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:58.278 00:01:04 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=82b726b8-329a-48ba-8999-48e3353aa757 00:16:58.278 00:01:04 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 82b726b8-329a-48ba-8999-48e3353aa757 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:58.537 00:01:04 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:58.537 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:58.537 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:58.537 00:01:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:58.537 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:58.537 { 00:16:58.537 "name": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:58.537 "aliases": [ 00:16:58.537 "lvs/nvme0n1p0" 00:16:58.537 ], 00:16:58.537 "product_name": "Logical Volume", 00:16:58.537 "block_size": 4096, 00:16:58.537 "num_blocks": 26476544, 00:16:58.537 "uuid": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:58.537 "assigned_rate_limits": { 00:16:58.537 "rw_ios_per_sec": 0, 00:16:58.537 "rw_mbytes_per_sec": 0, 00:16:58.538 "r_mbytes_per_sec": 0, 00:16:58.538 "w_mbytes_per_sec": 0 00:16:58.538 }, 00:16:58.538 "claimed": false, 00:16:58.538 "zoned": false, 00:16:58.538 "supported_io_types": { 00:16:58.538 "read": true, 00:16:58.538 "write": true, 00:16:58.538 "unmap": true, 00:16:58.538 "flush": false, 00:16:58.538 "reset": true, 00:16:58.538 "nvme_admin": false, 00:16:58.538 "nvme_io": false, 00:16:58.538 "nvme_io_md": false, 00:16:58.538 "write_zeroes": true, 00:16:58.538 "zcopy": false, 00:16:58.538 "get_zone_info": false, 00:16:58.538 "zone_management": false, 00:16:58.538 "zone_append": false, 00:16:58.538 "compare": false, 00:16:58.538 "compare_and_write": false, 00:16:58.538 "abort": false, 00:16:58.538 "seek_hole": true, 00:16:58.538 "seek_data": true, 00:16:58.538 "copy": false, 00:16:58.538 "nvme_iov_md": false 00:16:58.538 }, 00:16:58.538 "driver_specific": { 00:16:58.538 "lvol": { 00:16:58.538 "lvol_store_uuid": "82b726b8-329a-48ba-8999-48e3353aa757", 00:16:58.538 "base_bdev": "nvme0n1", 00:16:58.538 "thin_provision": true, 00:16:58.538 "num_allocated_clusters": 0, 00:16:58.538 "snapshot": false, 00:16:58.538 "clone": false, 00:16:58.538 "esnap_clone": false 00:16:58.538 } 00:16:58.538 } 00:16:58.538 } 00:16:58.538 ]' 00:16:58.538 00:01:05 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:58.538 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:58.538 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:58.796 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:58.796 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:58.796 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:58.796 00:01:05 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:58.796 00:01:05 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:58.796 00:01:05 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:59.055 00:01:05 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:59.055 00:01:05 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:59.055 00:01:05 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.055 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:59.055 { 00:16:59.055 "name": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:59.055 "aliases": [ 00:16:59.055 "lvs/nvme0n1p0" 00:16:59.055 ], 00:16:59.055 "product_name": "Logical Volume", 00:16:59.055 "block_size": 4096, 00:16:59.055 "num_blocks": 26476544, 00:16:59.055 "uuid": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:59.055 "assigned_rate_limits": { 00:16:59.055 "rw_ios_per_sec": 0, 00:16:59.055 "rw_mbytes_per_sec": 0, 00:16:59.055 "r_mbytes_per_sec": 0, 00:16:59.055 "w_mbytes_per_sec": 0 00:16:59.055 }, 00:16:59.055 "claimed": false, 00:16:59.055 "zoned": false, 00:16:59.055 "supported_io_types": { 00:16:59.055 "read": true, 00:16:59.055 "write": true, 00:16:59.055 "unmap": true, 00:16:59.055 "flush": false, 00:16:59.055 "reset": true, 00:16:59.055 "nvme_admin": false, 00:16:59.055 "nvme_io": false, 00:16:59.055 "nvme_io_md": false, 00:16:59.055 "write_zeroes": true, 00:16:59.055 "zcopy": false, 00:16:59.055 "get_zone_info": false, 00:16:59.055 "zone_management": false, 00:16:59.055 "zone_append": false, 00:16:59.055 "compare": false, 00:16:59.055 "compare_and_write": false, 00:16:59.055 "abort": false, 00:16:59.055 "seek_hole": true, 00:16:59.055 "seek_data": true, 00:16:59.055 "copy": false, 00:16:59.055 "nvme_iov_md": false 00:16:59.055 }, 00:16:59.055 "driver_specific": { 00:16:59.055 "lvol": { 00:16:59.055 "lvol_store_uuid": "82b726b8-329a-48ba-8999-48e3353aa757", 00:16:59.055 "base_bdev": "nvme0n1", 00:16:59.055 "thin_provision": true, 00:16:59.055 "num_allocated_clusters": 0, 00:16:59.055 "snapshot": false, 00:16:59.055 "clone": false, 00:16:59.055 "esnap_clone": false 00:16:59.055 } 00:16:59.055 } 00:16:59.055 } 00:16:59.056 ]' 00:16:59.056 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:59.056 00:01:05 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:59.056 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:59.314 00:01:05 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:59.314 00:01:05 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:59.314 00:01:05 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:59.314 00:01:05 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:59.314 00:01:05 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:59.314 00:01:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed2f13fb-645f-47f3-9f7c-f98c3da65db9 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:59.573 { 00:16:59.573 "name": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:59.573 "aliases": [ 00:16:59.573 "lvs/nvme0n1p0" 00:16:59.573 ], 00:16:59.573 "product_name": "Logical Volume", 00:16:59.573 "block_size": 4096, 00:16:59.573 "num_blocks": 26476544, 00:16:59.573 "uuid": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:16:59.573 "assigned_rate_limits": { 00:16:59.573 "rw_ios_per_sec": 0, 00:16:59.573 "rw_mbytes_per_sec": 0, 00:16:59.573 "r_mbytes_per_sec": 0, 00:16:59.573 "w_mbytes_per_sec": 0 00:16:59.573 }, 00:16:59.573 "claimed": false, 00:16:59.573 "zoned": false, 00:16:59.573 "supported_io_types": { 00:16:59.573 "read": true, 00:16:59.573 "write": true, 00:16:59.573 "unmap": true, 00:16:59.573 "flush": false, 00:16:59.573 "reset": true, 00:16:59.573 "nvme_admin": false, 00:16:59.573 "nvme_io": false, 00:16:59.573 "nvme_io_md": false, 00:16:59.573 "write_zeroes": true, 00:16:59.573 "zcopy": false, 00:16:59.573 "get_zone_info": false, 00:16:59.573 "zone_management": false, 00:16:59.573 "zone_append": false, 00:16:59.573 "compare": false, 00:16:59.573 "compare_and_write": false, 00:16:59.573 "abort": false, 00:16:59.573 "seek_hole": true, 00:16:59.573 "seek_data": true, 00:16:59.573 "copy": false, 00:16:59.573 "nvme_iov_md": false 00:16:59.573 }, 00:16:59.573 "driver_specific": { 00:16:59.573 "lvol": { 00:16:59.573 "lvol_store_uuid": "82b726b8-329a-48ba-8999-48e3353aa757", 00:16:59.573 "base_bdev": "nvme0n1", 00:16:59.573 "thin_provision": true, 00:16:59.573 "num_allocated_clusters": 0, 00:16:59.573 "snapshot": false, 00:16:59.573 "clone": false, 00:16:59.573 "esnap_clone": false 00:16:59.573 } 00:16:59.573 } 00:16:59.573 } 00:16:59.573 ]' 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:59.573 00:01:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:59.573 00:01:06 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:59.573 00:01:06 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ed2f13fb-645f-47f3-9f7c-f98c3da65db9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:59.832 [2024-11-19 00:01:06.401187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.401232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.832 [2024-11-19 00:01:06.401248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.832 [2024-11-19 00:01:06.401255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.403611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.403641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.832 [2024-11-19 00:01:06.403650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:16:59.832 [2024-11-19 00:01:06.403657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.403728] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.832 [2024-11-19 00:01:06.404318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.832 [2024-11-19 00:01:06.404395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.404403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.832 [2024-11-19 00:01:06.404412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:16:59.832 [2024-11-19 00:01:06.404418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.404612] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:16:59.832 [2024-11-19 00:01:06.405870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.405897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:59.832 [2024-11-19 00:01:06.405906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:59.832 [2024-11-19 00:01:06.405915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.412694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.412720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.832 [2024-11-19 00:01:06.412733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:16:59.832 [2024-11-19 00:01:06.412740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.412842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.412853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.832 [2024-11-19 00:01:06.412860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:16:59.832 [2024-11-19 00:01:06.412871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.412903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.412911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.832 [2024-11-19 00:01:06.412917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.832 [2024-11-19 00:01:06.412924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.412950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.832 [2024-11-19 00:01:06.416200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.416224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.832 [2024-11-19 00:01:06.416236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:16:59.832 [2024-11-19 00:01:06.416242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.416290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.832 [2024-11-19 00:01:06.416298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.832 [2024-11-19 00:01:06.416307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.832 [2024-11-19 00:01:06.416324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.832 [2024-11-19 00:01:06.416350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:59.832 [2024-11-19 00:01:06.416459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.832 [2024-11-19 00:01:06.416473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.832 [2024-11-19 00:01:06.416481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:59.832 [2024-11-19 00:01:06.416491] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.833 [2024-11-19 00:01:06.416513] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.833 [2024-11-19 00:01:06.416520] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.833 [2024-11-19 00:01:06.416527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.833 [2024-11-19 00:01:06.416535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.833 [2024-11-19 00:01:06.416541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.833 [2024-11-19 00:01:06.416549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:16:59.833 [2024-11-19 00:01:06.416555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.833 [2024-11-19 00:01:06.416634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.833 
[2024-11-19 00:01:06.416641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.833 [2024-11-19 00:01:06.416650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:59.833 [2024-11-19 00:01:06.416655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.833 [2024-11-19 00:01:06.416752] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.833 [2024-11-19 00:01:06.416760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.833 [2024-11-19 00:01:06.416767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.833 [2024-11-19 00:01:06.416786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.833 [2024-11-19 00:01:06.416806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.833 [2024-11-19 00:01:06.416817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.833 [2024-11-19 00:01:06.416824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.833 [2024-11-19 00:01:06.416830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.833 [2024-11-19 00:01:06.416836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.833 [2024-11-19 00:01:06.416843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:59.833 [2024-11-19 00:01:06.416849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.833 [2024-11-19 00:01:06.416862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.833 [2024-11-19 00:01:06.416881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.833 [2024-11-19 00:01:06.416902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.833 [2024-11-19 00:01:06.416922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:59.833 [2024-11-19 00:01:06.416938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.833 [2024-11-19 00:01:06.416950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.833 [2024-11-19 00:01:06.416957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:59.833 [2024-11-19 00:01:06.416963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.833 [2024-11-19 00:01:06.416969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.833 [2024-11-19 00:01:06.416974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:59.833 [2024-11-19 00:01:06.416981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.833 [2024-11-19 00:01:06.416986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.833 [2024-11-19 00:01:06.416992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:59.833 [2024-11-19 00:01:06.416997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.417004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.833 [2024-11-19 00:01:06.417009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:59.833 [2024-11-19 00:01:06.417015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.417019] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.833 [2024-11-19 00:01:06.417026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.833 [2024-11-19 00:01:06.417032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.833 [2024-11-19 00:01:06.417039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.833 [2024-11-19 00:01:06.417045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.833 [2024-11-19 00:01:06.417054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.833 [2024-11-19 00:01:06.417059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.833 [2024-11-19 00:01:06.417073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.833 [2024-11-19 00:01:06.417078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.833 [2024-11-19 00:01:06.417085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.833 [2024-11-19 00:01:06.417093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.833 [2024-11-19 00:01:06.417102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.833 [2024-11-19 00:01:06.417117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:59.833 [2024-11-19 00:01:06.417135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:59.833 [2024-11-19 00:01:06.417143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:59.833 [2024-11-19 00:01:06.417149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:59.833 [2024-11-19 00:01:06.417157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:59.833 [2024-11-19 00:01:06.417163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:59.833 [2024-11-19 00:01:06.417171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:59.833 [2024-11-19 00:01:06.417177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:59.833 [2024-11-19 00:01:06.417186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:59.833 [2024-11-19 00:01:06.417217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.833 [2024-11-19 00:01:06.417229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.833 [2024-11-19 00:01:06.417243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.833 [2024-11-19 00:01:06.417248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.833 [2024-11-19 00:01:06.417256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.833 [2024-11-19 00:01:06.417262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.833 [2024-11-19 00:01:06.417270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.833 [2024-11-19 00:01:06.417276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:16:59.833 [2024-11-19 00:01:06.417283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.833 [2024-11-19 00:01:06.417360] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:59.833 [2024-11-19 00:01:06.417373] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:02.373 [2024-11-19 00:01:08.862691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.373 [2024-11-19 00:01:08.862755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:02.373 [2024-11-19 00:01:08.862772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2445.320 ms 00:17:02.373 [2024-11-19 00:01:08.862782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.373 [2024-11-19 00:01:08.891008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.373 [2024-11-19 00:01:08.891058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.373 [2024-11-19 00:01:08.891073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.975 ms 00:17:02.373 [2024-11-19 00:01:08.891084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.891247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.891262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.374 [2024-11-19 00:01:08.891272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:02.374 [2024-11-19 00:01:08.891286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.934348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.934391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.374 [2024-11-19 00:01:08.934404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.012 ms 00:17:02.374 [2024-11-19 00:01:08.934415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.934499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.934513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.374 [2024-11-19 00:01:08.934521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.374 [2024-11-19 00:01:08.934530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.934945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.934966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.374 [2024-11-19 00:01:08.934975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:17:02.374 [2024-11-19 00:01:08.934984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.935097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.935115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.374 [2024-11-19 00:01:08.935148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:02.374 [2024-11-19 00:01:08.935161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.952508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.952699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:02.374 [2024-11-19 00:01:08.952716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.298 ms 00:17:02.374 [2024-11-19 00:01:08.952726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:08.964997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.374 [2024-11-19 00:01:08.982275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:08.982310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.374 [2024-11-19 00:01:08.982323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.442 ms 00:17:02.374 [2024-11-19 00:01:08.982331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:09.049582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:09.049628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:02.374 [2024-11-19 00:01:09.049642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.169 ms 00:17:02.374 [2024-11-19 00:01:09.049651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.374 [2024-11-19 00:01:09.049875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.374 [2024-11-19 00:01:09.049887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.374 [2024-11-19 00:01:09.049901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:02.374 [2024-11-19 00:01:09.049910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.635 [2024-11-19 00:01:09.073205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.073238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:02.635 [2024-11-19 00:01:09.073251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.260 ms 00:17:02.635 [2024-11-19 00:01:09.073258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.635 [2024-11-19 00:01:09.095487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.095657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:02.635 [2024-11-19 00:01:09.095678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.162 ms 00:17:02.635 [2024-11-19 00:01:09.095686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.635 [2024-11-19 00:01:09.096571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.096609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.635 [2024-11-19 00:01:09.096624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:17:02.635 [2024-11-19 00:01:09.096632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.635 [2024-11-19 00:01:09.180026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.180064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:02.635 [2024-11-19 00:01:09.180084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.350 ms 00:17:02.635 [2024-11-19 00:01:09.180093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
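
The ftl_l2p_cache notice just above, "l2p maximum resident size is: 59 (of 60) MiB", ties the trim.sh parameters back to the layout dump printed earlier: ftl0 exposes 23592960 user blocks ("L2P entries: 23592960"), each mapped by a 4-byte address ("L2P address size: 4"), so the full logical-to-physical table needs 90 MiB, exactly the "Region l2p ... blocks: 90.00 MiB" region, while the `--l2p_dram_limit 60` passed to bdev_ftl_create caps how much of that table may stay resident in DRAM at once. A quick sanity check of those figures as plain shell arithmetic (an annotation only, not part of the test run):

# Figures reported by the startup trace and layout dump above.
l2p_entries=23592960   # one L2P entry per exposed 4 KiB user block
l2p_addr_size=4        # bytes per L2P address
dram_limit_mib=60      # --l2p_dram_limit 60 from trim.sh

echo "full L2P table: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # -> 90
echo "user capacity:  $(( l2p_entries * 4096 / 1024 / 1024 / 1024 )) GiB"     # -> 90
echo "resident cap:   ${dram_limit_mib} MiB"   # FTL keeps at most ~60 MiB of the table in DRAM

The same 23592960 figure reappears as "num_blocks" in the bdev_get_bdevs output for ftl0 further down: one L2P entry per exposed 4 KiB block.
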
00:17:02.635 [2024-11-19 00:01:09.205044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.205075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:02.635 [2024-11-19 00:01:09.205089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.841 ms 00:17:02.635 [2024-11-19 00:01:09.205097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.635 [2024-11-19 00:01:09.227875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.635 [2024-11-19 00:01:09.227905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:02.635 [2024-11-19 00:01:09.227918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.718 ms 00:17:02.635 [2024-11-19 00:01:09.227925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.636 [2024-11-19 00:01:09.250854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.636 [2024-11-19 00:01:09.250997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.636 [2024-11-19 00:01:09.251018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.874 ms 00:17:02.636 [2024-11-19 00:01:09.251038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.636 [2024-11-19 00:01:09.251097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.636 [2024-11-19 00:01:09.251109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.636 [2024-11-19 00:01:09.251138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:02.636 [2024-11-19 00:01:09.251146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.636 [2024-11-19 00:01:09.251225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.636 [2024-11-19 00:01:09.251235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.636 [2024-11-19 00:01:09.251245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:02.636 [2024-11-19 00:01:09.251252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.636 [2024-11-19 00:01:09.252116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.636 [2024-11-19 00:01:09.254964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2850.631 ms, result 0 00:17:02.636 [2024-11-19 00:01:09.255924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.636 { 00:17:02.636 "name": "ftl0", 00:17:02.636 "uuid": "00376b2c-fdd4-4b48-80c2-de968aa5cce4" 00:17:02.636 } 00:17:02.636 00:01:09 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.636 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:02.897 00:01:09 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:03.158 [ 00:17:03.158 { 00:17:03.158 "name": "ftl0", 00:17:03.158 "aliases": [ 00:17:03.158 "00376b2c-fdd4-4b48-80c2-de968aa5cce4" 00:17:03.158 ], 00:17:03.158 "product_name": "FTL disk", 00:17:03.158 "block_size": 4096, 00:17:03.158 "num_blocks": 23592960, 00:17:03.158 "uuid": "00376b2c-fdd4-4b48-80c2-de968aa5cce4", 00:17:03.158 "assigned_rate_limits": { 00:17:03.158 "rw_ios_per_sec": 0, 00:17:03.158 "rw_mbytes_per_sec": 0, 00:17:03.158 "r_mbytes_per_sec": 0, 00:17:03.158 "w_mbytes_per_sec": 0 00:17:03.158 }, 00:17:03.158 "claimed": false, 00:17:03.158 "zoned": false, 00:17:03.158 "supported_io_types": { 00:17:03.158 "read": true, 00:17:03.158 "write": true, 00:17:03.158 "unmap": true, 00:17:03.158 "flush": true, 00:17:03.158 "reset": false, 00:17:03.158 "nvme_admin": false, 00:17:03.158 "nvme_io": false, 00:17:03.158 "nvme_io_md": false, 00:17:03.158 "write_zeroes": true, 00:17:03.158 "zcopy": false, 00:17:03.158 "get_zone_info": false, 00:17:03.158 "zone_management": false, 00:17:03.158 "zone_append": false, 00:17:03.158 "compare": false, 00:17:03.158 "compare_and_write": false, 00:17:03.158 "abort": false, 00:17:03.158 "seek_hole": false, 00:17:03.158 "seek_data": false, 00:17:03.158 "copy": false, 00:17:03.158 "nvme_iov_md": false 00:17:03.158 }, 00:17:03.158 "driver_specific": { 00:17:03.158 "ftl": { 00:17:03.158 "base_bdev": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 00:17:03.158 "cache": "nvc0n1p0" 00:17:03.158 } 00:17:03.158 } 00:17:03.158 } 00:17:03.158 ] 00:17:03.158 00:01:09 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:03.158 00:01:09 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:03.158 00:01:09 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:03.420 00:01:09 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:03.420 00:01:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:03.420 00:01:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:03.420 { 00:17:03.420 "name": "ftl0", 00:17:03.420 "aliases": [ 00:17:03.420 "00376b2c-fdd4-4b48-80c2-de968aa5cce4" 00:17:03.420 ], 00:17:03.420 "product_name": "FTL disk", 00:17:03.420 "block_size": 4096, 00:17:03.420 "num_blocks": 23592960, 00:17:03.420 "uuid": "00376b2c-fdd4-4b48-80c2-de968aa5cce4", 00:17:03.420 "assigned_rate_limits": { 00:17:03.420 "rw_ios_per_sec": 0, 00:17:03.420 "rw_mbytes_per_sec": 0, 00:17:03.420 "r_mbytes_per_sec": 0, 00:17:03.420 "w_mbytes_per_sec": 0 00:17:03.420 }, 00:17:03.420 "claimed": false, 00:17:03.420 "zoned": false, 00:17:03.420 "supported_io_types": { 00:17:03.420 "read": true, 00:17:03.420 "write": true, 00:17:03.420 "unmap": true, 00:17:03.420 "flush": true, 00:17:03.420 "reset": false, 00:17:03.420 "nvme_admin": false, 00:17:03.420 "nvme_io": false, 00:17:03.420 "nvme_io_md": false, 00:17:03.420 "write_zeroes": true, 00:17:03.420 "zcopy": false, 00:17:03.420 "get_zone_info": false, 00:17:03.420 "zone_management": false, 00:17:03.420 "zone_append": false, 00:17:03.420 "compare": false, 00:17:03.420 "compare_and_write": false, 00:17:03.420 "abort": false, 00:17:03.420 "seek_hole": false, 00:17:03.420 "seek_data": false, 00:17:03.420 "copy": false, 00:17:03.420 "nvme_iov_md": false 00:17:03.420 }, 00:17:03.420 "driver_specific": { 00:17:03.420 "ftl": { 00:17:03.420 "base_bdev": "ed2f13fb-645f-47f3-9f7c-f98c3da65db9", 
00:17:03.420 "cache": "nvc0n1p0" 00:17:03.420 } 00:17:03.420 } 00:17:03.420 } 00:17:03.420 ]' 00:17:03.420 00:01:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:03.682 00:01:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:03.682 00:01:10 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:03.682 [2024-11-19 00:01:10.295532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.295578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:03.682 [2024-11-19 00:01:10.295594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:03.682 [2024-11-19 00:01:10.295607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.295638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:03.682 [2024-11-19 00:01:10.298456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.298483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:03.682 [2024-11-19 00:01:10.298500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:17:03.682 [2024-11-19 00:01:10.298509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.299095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.299117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:03.682 [2024-11-19 00:01:10.299145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:03.682 [2024-11-19 00:01:10.299152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.302801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.302826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:03.682 [2024-11-19 00:01:10.302838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:17:03.682 [2024-11-19 00:01:10.302846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.309819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.309963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:03.682 [2024-11-19 00:01:10.309982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:17:03.682 [2024-11-19 00:01:10.309991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.334265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.334295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:03.682 [2024-11-19 00:01:10.334310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.185 ms 00:17:03.682 [2024-11-19 00:01:10.334318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.350426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.350456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:03.682 [2024-11-19 00:01:10.350469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.048 ms 00:17:03.682 [2024-11-19 00:01:10.350479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.682 [2024-11-19 00:01:10.350677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.682 [2024-11-19 00:01:10.350688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:03.682 [2024-11-19 00:01:10.350699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:03.682 [2024-11-19 00:01:10.350707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.945 [2024-11-19 00:01:10.373913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.945 [2024-11-19 00:01:10.373942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:03.945 [2024-11-19 00:01:10.373954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.172 ms 00:17:03.945 [2024-11-19 00:01:10.373961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.945 [2024-11-19 00:01:10.396869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.945 [2024-11-19 00:01:10.396986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:03.945 [2024-11-19 00:01:10.397008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.850 ms 00:17:03.945 [2024-11-19 00:01:10.397015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.945 [2024-11-19 00:01:10.419275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.945 [2024-11-19 00:01:10.419305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:03.945 [2024-11-19 00:01:10.419316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.206 ms 00:17:03.945 [2024-11-19 00:01:10.419324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.945 [2024-11-19 00:01:10.441424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.945 [2024-11-19 00:01:10.441453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:03.945 [2024-11-19 00:01:10.441465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.992 ms 00:17:03.945 [2024-11-19 00:01:10.441473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.945 [2024-11-19 00:01:10.441531] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:03.945 [2024-11-19 00:01:10.441547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441613] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:03.945 [2024-11-19 00:01:10.441749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 
[2024-11-19 00:01:10.441844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.441999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:03.946 [2024-11-19 00:01:10.442059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:03.946 [2024-11-19 00:01:10.442400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:03.947 [2024-11-19 00:01:10.442467] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:03.947 [2024-11-19 00:01:10.442479] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:03.947 [2024-11-19 00:01:10.442487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:03.947 [2024-11-19 00:01:10.442496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:03.947 [2024-11-19 00:01:10.442503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:03.947 [2024-11-19 00:01:10.442512] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:03.947 [2024-11-19 00:01:10.442521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:03.947 [2024-11-19 00:01:10.442531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:03.947 [2024-11-19 00:01:10.442538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:03.947 [2024-11-19 00:01:10.442546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:03.947 [2024-11-19 00:01:10.442555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:03.947 [2024-11-19 00:01:10.442564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.947 [2024-11-19 00:01:10.442572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:03.947 [2024-11-19 00:01:10.442581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:17:03.947 [2024-11-19 00:01:10.442588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.455840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.947 [2024-11-19 00:01:10.455869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:03.947 [2024-11-19 00:01:10.455886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.202 ms 00:17:03.947 [2024-11-19 00:01:10.455895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.456304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.947 [2024-11-19 00:01:10.456315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:03.947 [2024-11-19 00:01:10.456327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:17:03.947 [2024-11-19 00:01:10.456335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.503179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.947 [2024-11-19 00:01:10.503211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.947 [2024-11-19 00:01:10.503224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.947 [2024-11-19 00:01:10.503232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.503338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.947 [2024-11-19 00:01:10.503348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.947 [2024-11-19 00:01:10.503358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.947 [2024-11-19 00:01:10.503366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.503423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.947 [2024-11-19 00:01:10.503433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.947 [2024-11-19 00:01:10.503455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.947 [2024-11-19 00:01:10.503462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.503493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.947 [2024-11-19 00:01:10.503501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.947 [2024-11-19 00:01:10.503510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.947 [2024-11-19 00:01:10.503517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.947 [2024-11-19 00:01:10.589716] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.947 [2024-11-19 00:01:10.589754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.947 [2024-11-19 00:01:10.589767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.947 [2024-11-19 00:01:10.589774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.655559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.655596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.209 [2024-11-19 00:01:10.655608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.655616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.655711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.655721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.209 [2024-11-19 00:01:10.655746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.655757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.655813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.655822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.209 [2024-11-19 00:01:10.655832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.655839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.655951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.655961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.209 [2024-11-19 00:01:10.655985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.655992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.656050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.656060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.209 [2024-11-19 00:01:10.656069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.656077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.656153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.656162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.209 [2024-11-19 00:01:10.656174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.656181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.209 [2024-11-19 00:01:10.656245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.209 [2024-11-19 00:01:10.656256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.209 [2024-11-19 00:01:10.656265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.209 [2024-11-19 00:01:10.656273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:04.209 [2024-11-19 00:01:10.656466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.916 ms, result 0 00:17:04.209 true 00:17:04.209 00:01:10 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73624 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73624 ']' 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73624 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73624 00:17:04.209 killing process with pid 73624 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73624' 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73624 00:17:04.209 00:01:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73624 00:17:10.798 00:01:16 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:10.798 65536+0 records in 00:17:10.798 65536+0 records out 00:17:10.798 268435456 bytes (268 MB, 256 MiB) copied, 0.793379 s, 338 MB/s 00:17:10.798 00:01:17 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.798 [2024-11-19 00:01:17.242755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
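For readers tracing the harness rather than the driver: the killprocess steps traced above for pid 73624 boil down to the shape below. This is a minimal bash sketch reconstructed only from the xtrace lines in this log, not copied from autotest_common.sh, and it drops the sudo-wrapper branch that the real helper handles specially:

  killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                # mirrors the '[' -z 73624 ']' guard
    kill -0 "$pid" 2>/dev/null || return 1   # process must still be alive
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the run above
      [ "$process_name" = sudo ] && return 1 # simplification: real helper handles sudo differently
      echo "killing process with pid $pid"
      kill "$pid"
    fi
    wait "$pid"                              # reap it, as 'wait 73624' does above
  }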
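The dd numbers above are internally consistent: 65536 writes of 4 KiB are 268435456 bytes (256 MiB, about 268 decimal MB), and dividing by the reported 0.793379 s gives the printed 338 MB/s. A throw-away check, using only values copied from the log:

  echo $(( 65536 * 4096 ))   # 268435456 bytes of random pattern, fed to the spdk_dd run launched just above
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.793379 / 1e6 }'   # prints 338 MB/s, matching dd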
00:17:10.798 [2024-11-19 00:01:17.243076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73812 ] 00:17:10.798 [2024-11-19 00:01:17.402759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.058 [2024-11-19 00:01:17.512408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.320 [2024-11-19 00:01:17.820012] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.320 [2024-11-19 00:01:17.820097] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.320 [2024-11-19 00:01:17.986056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.320 [2024-11-19 00:01:17.986140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:11.320 [2024-11-19 00:01:17.986160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:11.320 [2024-11-19 00:01:17.986170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.320 [2024-11-19 00:01:17.989424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.320 [2024-11-19 00:01:17.989659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:11.320 [2024-11-19 00:01:17.989681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:17:11.320 [2024-11-19 00:01:17.989692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.320 [2024-11-19 00:01:17.990208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:11.320 [2024-11-19 00:01:17.991008] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:11.320 [2024-11-19 00:01:17.991052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.320 [2024-11-19 00:01:17.991062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.320 [2024-11-19 00:01:17.991073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:17:11.320 [2024-11-19 00:01:17.991082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.320 [2024-11-19 00:01:17.993422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:11.320 [2024-11-19 00:01:18.008675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.582 [2024-11-19 00:01:18.008902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:11.582 [2024-11-19 00:01:18.008926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.255 ms 00:17:11.582 [2024-11-19 00:01:18.008935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.582 [2024-11-19 00:01:18.009046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.582 [2024-11-19 00:01:18.009059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:11.582 [2024-11-19 00:01:18.009069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:11.582 [2024-11-19 00:01:18.009078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.582 [2024-11-19 00:01:18.020602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:11.582 [2024-11-19 00:01:18.020643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.582 [2024-11-19 00:01:18.020656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.454 ms 00:17:11.582 [2024-11-19 00:01:18.020664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.582 [2024-11-19 00:01:18.020792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.582 [2024-11-19 00:01:18.020804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.582 [2024-11-19 00:01:18.020813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:11.582 [2024-11-19 00:01:18.020823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.582 [2024-11-19 00:01:18.020850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.582 [2024-11-19 00:01:18.020863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:11.582 [2024-11-19 00:01:18.020872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:11.582 [2024-11-19 00:01:18.020881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.583 [2024-11-19 00:01:18.020902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:11.583 [2024-11-19 00:01:18.025555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.583 [2024-11-19 00:01:18.025596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.583 [2024-11-19 00:01:18.025609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:17:11.583 [2024-11-19 00:01:18.025617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.583 [2024-11-19 00:01:18.025676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.583 [2024-11-19 00:01:18.025686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:11.583 [2024-11-19 00:01:18.025696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:11.583 [2024-11-19 00:01:18.025704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.583 [2024-11-19 00:01:18.025725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:11.583 [2024-11-19 00:01:18.025753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:11.583 [2024-11-19 00:01:18.025795] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:11.583 [2024-11-19 00:01:18.025814] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:11.583 [2024-11-19 00:01:18.025928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:11.583 [2024-11-19 00:01:18.025941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:11.583 [2024-11-19 00:01:18.025953] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:11.583 [2024-11-19 00:01:18.025963] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:11.583 [2024-11-19 00:01:18.025977] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:11.583 [2024-11-19 00:01:18.025986] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:11.583 [2024-11-19 00:01:18.025995] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:11.583 [2024-11-19 00:01:18.026003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:11.583 [2024-11-19 00:01:18.026012] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:11.583 [2024-11-19 00:01:18.026021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.583 [2024-11-19 00:01:18.026029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:11.583 [2024-11-19 00:01:18.026040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:11.583 [2024-11-19 00:01:18.026048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.583 [2024-11-19 00:01:18.026176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.583 [2024-11-19 00:01:18.026188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:11.583 [2024-11-19 00:01:18.026201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:11.583 [2024-11-19 00:01:18.026208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.583 [2024-11-19 00:01:18.026316] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:11.583 [2024-11-19 00:01:18.026331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:11.583 [2024-11-19 00:01:18.026342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:11.583 [2024-11-19 00:01:18.026368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:11.583 [2024-11-19 00:01:18.026392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.583 [2024-11-19 00:01:18.026408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:11.583 [2024-11-19 00:01:18.026416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:11.583 [2024-11-19 00:01:18.026423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.583 [2024-11-19 00:01:18.026440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:11.583 [2024-11-19 00:01:18.026452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:11.583 [2024-11-19 00:01:18.026460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:11.583 [2024-11-19 00:01:18.026475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026482] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:11.583 [2024-11-19 00:01:18.026496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:11.583 [2024-11-19 00:01:18.026521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:11.583 [2024-11-19 00:01:18.026542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:11.583 [2024-11-19 00:01:18.026562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:11.583 [2024-11-19 00:01:18.026582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.583 [2024-11-19 00:01:18.026596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:11.583 [2024-11-19 00:01:18.026602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:11.583 [2024-11-19 00:01:18.026609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.583 [2024-11-19 00:01:18.026616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:11.583 [2024-11-19 00:01:18.026623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:11.583 [2024-11-19 00:01:18.026629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:11.583 [2024-11-19 00:01:18.026645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:11.583 [2024-11-19 00:01:18.026653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:11.583 [2024-11-19 00:01:18.026667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:11.583 [2024-11-19 00:01:18.026675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.583 [2024-11-19 00:01:18.026687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.583 [2024-11-19 00:01:18.026697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:11.583 [2024-11-19 00:01:18.026705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:11.583 [2024-11-19 00:01:18.026712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:11.583 
[2024-11-19 00:01:18.026719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:11.583 [2024-11-19 00:01:18.026726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:11.583 [2024-11-19 00:01:18.026733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:11.584 [2024-11-19 00:01:18.026742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:11.584 [2024-11-19 00:01:18.026752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:11.584 [2024-11-19 00:01:18.026770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:11.584 [2024-11-19 00:01:18.026777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:11.584 [2024-11-19 00:01:18.026785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:11.584 [2024-11-19 00:01:18.026792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:11.584 [2024-11-19 00:01:18.026798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:11.584 [2024-11-19 00:01:18.026806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:11.584 [2024-11-19 00:01:18.026814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:11.584 [2024-11-19 00:01:18.026823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:11.584 [2024-11-19 00:01:18.026830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:11.584 [2024-11-19 00:01:18.026870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:11.584 [2024-11-19 00:01:18.026879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:11.584 [2024-11-19 00:01:18.026896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:11.584 [2024-11-19 00:01:18.026904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:11.584 [2024-11-19 00:01:18.026912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:11.584 [2024-11-19 00:01:18.026922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.026934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:11.584 [2024-11-19 00:01:18.026945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:17:11.584 [2024-11-19 00:01:18.026955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.064983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.065033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.584 [2024-11-19 00:01:18.065046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.969 ms 00:17:11.584 [2024-11-19 00:01:18.065055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.065216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.065234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:11.584 [2024-11-19 00:01:18.065245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:11.584 [2024-11-19 00:01:18.065253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.115677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.115729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.584 [2024-11-19 00:01:18.115744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.400 ms 00:17:11.584 [2024-11-19 00:01:18.115757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.115874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.115887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.584 [2024-11-19 00:01:18.115899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:11.584 [2024-11-19 00:01:18.115908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.116627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.116669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.584 [2024-11-19 00:01:18.116681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:17:11.584 [2024-11-19 00:01:18.116699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.116871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.116884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.584 [2024-11-19 00:01:18.116893] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:11.584 [2024-11-19 00:01:18.116902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.131917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.131947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.584 [2024-11-19 00:01:18.131956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.992 ms 00:17:11.584 [2024-11-19 00:01:18.131964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.144995] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:11.584 [2024-11-19 00:01:18.145029] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:11.584 [2024-11-19 00:01:18.145041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.145049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:11.584 [2024-11-19 00:01:18.145058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.981 ms 00:17:11.584 [2024-11-19 00:01:18.145065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.169705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.169738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:11.584 [2024-11-19 00:01:18.169756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.558 ms 00:17:11.584 [2024-11-19 00:01:18.169764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.181735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.181900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:11.584 [2024-11-19 00:01:18.181917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.903 ms 00:17:11.584 [2024-11-19 00:01:18.181925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.193532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.193561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:11.584 [2024-11-19 00:01:18.193572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.544 ms 00:17:11.584 [2024-11-19 00:01:18.193580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.194251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.194273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.584 [2024-11-19 00:01:18.194283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:17:11.584 [2024-11-19 00:01:18.194291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.252846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.584 [2024-11-19 00:01:18.253022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:11.584 [2024-11-19 00:01:18.253041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.530 ms 00:17:11.584 [2024-11-19 00:01:18.253050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.584 [2024-11-19 00:01:18.264383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:11.847 [2024-11-19 00:01:18.281830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.281866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:11.847 [2024-11-19 00:01:18.281877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.662 ms 00:17:11.847 [2024-11-19 00:01:18.281886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.281975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.281989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:11.847 [2024-11-19 00:01:18.281998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:11.847 [2024-11-19 00:01:18.282007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.282061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.282072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.847 [2024-11-19 00:01:18.282080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:11.847 [2024-11-19 00:01:18.282088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.282114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.282146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.847 [2024-11-19 00:01:18.282159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:11.847 [2024-11-19 00:01:18.282166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.282204] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:11.847 [2024-11-19 00:01:18.282214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.282221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:11.847 [2024-11-19 00:01:18.282230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:11.847 [2024-11-19 00:01:18.282238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.306700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.306739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.847 [2024-11-19 00:01:18.306751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.442 ms 00:17:11.847 [2024-11-19 00:01:18.306758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.847 [2024-11-19 00:01:18.306852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.847 [2024-11-19 00:01:18.306863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:11.847 [2024-11-19 00:01:18.306872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:11.847 [2024-11-19 00:01:18.306880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
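The sizing in the layout dump above can be cross-checked from numbers already printed in this log: the 23592960 L2P entries equal the num_blocks the bdev exported earlier (nb=23592960); at the stated 4-byte address size they need exactly the 90.00 MiB "Region l2p" shown; and the 100 bands of 261120 blocks listed in the earlier shutdown dump give slightly more raw capacity than the exported space, the surplus presumably serving as spare room for relocation. A quick bash check, all inputs copied from the log:

  echo $(( 23592960 * 4 / 1048576 ))         # L2P table: 90 MiB, matches "Region l2p ... blocks: 90.00 MiB"
  echo $(( 23592960 * 4096 / 1048576 ))      # exported capacity: 92160 MiB (90 GiB) of 4 KiB blocks
  echo $(( 100 * 261120 * 4096 / 1048576 ))  # band capacity from the shutdown dump: 102000 MiB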
00:17:11.847 [2024-11-19 00:01:18.307973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.847 [2024-11-19 00:01:18.311177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.629 ms, result 0 00:17:11.847 [2024-11-19 00:01:18.312388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:11.847 [2024-11-19 00:01:18.325306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.787  [2024-11-19T00:01:20.416Z] Copying: 20/256 [MB] (20 MBps) [2024-11-19T00:01:21.361Z] Copying: 41/256 [MB] (21 MBps) [2024-11-19T00:01:22.747Z] Copying: 55/256 [MB] (13 MBps) [2024-11-19T00:01:23.690Z] Copying: 66/256 [MB] (11 MBps) [2024-11-19T00:01:24.630Z] Copying: 84/256 [MB] (18 MBps) [2024-11-19T00:01:25.568Z] Copying: 106/256 [MB] (22 MBps) [2024-11-19T00:01:26.565Z] Copying: 125/256 [MB] (19 MBps) [2024-11-19T00:01:27.508Z] Copying: 141/256 [MB] (15 MBps) [2024-11-19T00:01:28.453Z] Copying: 154/256 [MB] (13 MBps) [2024-11-19T00:01:29.399Z] Copying: 167/256 [MB] (12 MBps) [2024-11-19T00:01:30.342Z] Copying: 178/256 [MB] (11 MBps) [2024-11-19T00:01:31.724Z] Copying: 206/256 [MB] (27 MBps) [2024-11-19T00:01:32.663Z] Copying: 227/256 [MB] (21 MBps) [2024-11-19T00:01:33.235Z] Copying: 244/256 [MB] (16 MBps) [2024-11-19T00:01:33.235Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-19 00:01:33.015133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.543 [2024-11-19 00:01:33.024632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.543 [2024-11-19 00:01:33.024747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:26.543 [2024-11-19 00:01:33.024804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.543 [2024-11-19 00:01:33.024829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.543 [2024-11-19 00:01:33.024865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:26.543 [2024-11-19 00:01:33.027521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.543 [2024-11-19 00:01:33.027624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:26.544 [2024-11-19 00:01:33.027676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:17:26.544 [2024-11-19 00:01:33.027699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.030276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.030376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:26.544 [2024-11-19 00:01:33.030427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:17:26.544 [2024-11-19 00:01:33.030449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.037812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.037925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:26.544 [2024-11-19 00:01:33.037986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.333 ms 00:17:26.544 [2024-11-19 00:01:33.038010] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.044994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.045095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:26.544 [2024-11-19 00:01:33.045160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:17:26.544 [2024-11-19 00:01:33.045186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.069267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.069375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:26.544 [2024-11-19 00:01:33.069425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.026 ms 00:17:26.544 [2024-11-19 00:01:33.069446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.083387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.083514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:26.544 [2024-11-19 00:01:33.083573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.844 ms 00:17:26.544 [2024-11-19 00:01:33.083598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.083766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.083833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:26.544 [2024-11-19 00:01:33.083856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:26.544 [2024-11-19 00:01:33.083875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.107561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.107682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:26.544 [2024-11-19 00:01:33.107733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.634 ms 00:17:26.544 [2024-11-19 00:01:33.107743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.131746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.131784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:26.544 [2024-11-19 00:01:33.131796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.971 ms 00:17:26.544 [2024-11-19 00:01:33.131802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.155320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.155361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:26.544 [2024-11-19 00:01:33.155372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.477 ms 00:17:26.544 [2024-11-19 00:01:33.155379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.179664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.544 [2024-11-19 00:01:33.179706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:26.544 [2024-11-19 00:01:33.179717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.204 ms 00:17:26.544 [2024-11-19 00:01:33.179723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.544 [2024-11-19 00:01:33.179770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:26.544 [Bands 1 through 97 condensed: every entry reads "0 / 261120 wr_cnt: 0 state: free", identical to Bands 98-100 below] 00:17:26.545 [2024-11-19 00:01:33.180579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:26.545 [2024-11-19 00:01:33.180587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:26.545 [2024-11-19 00:01:33.180595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:26.545 [2024-11-19 00:01:33.180611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:26.545 [2024-11-19 00:01:33.180620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:26.545 [2024-11-19 00:01:33.180629] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:26.545 [2024-11-19 00:01:33.180636] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:26.545 [2024-11-19 00:01:33.180644] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:26.545 [2024-11-19 00:01:33.180652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:26.545 [2024-11-19 00:01:33.180659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:26.545 [2024-11-19 00:01:33.180668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:26.545 [2024-11-19 00:01:33.180676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:26.545 [2024-11-19 00:01:33.180683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:26.545 [2024-11-19 00:01:33.180689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:26.545 [2024-11-19 00:01:33.180696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.545 [2024-11-19 00:01:33.180704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:26.545 [2024-11-19 00:01:33.180715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:17:26.545 [2024-11-19 00:01:33.180722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.545 [2024-11-19 00:01:33.194437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.545 [2024-11-19 00:01:33.194596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:26.545 [2024-11-19 00:01:33.194613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.682 ms 00:17:26.545 [2024-11-19 00:01:33.194621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.545 [2024-11-19 00:01:33.195015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.545 [2024-11-19 00:01:33.195032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:26.545 [2024-11-19 00:01:33.195042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:17:26.545 [2024-11-19 00:01:33.195049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.233511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.233674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.807 [2024-11-19 00:01:33.233693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.233702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.233785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 
00:01:33.233798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.807 [2024-11-19 00:01:33.233807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.233814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.233865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.233876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.807 [2024-11-19 00:01:33.233884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.233892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.233909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.233918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.807 [2024-11-19 00:01:33.233929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.233936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.319469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.319524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.807 [2024-11-19 00:01:33.319537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.319546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.389971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.807 [2024-11-19 00:01:33.390048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.807 [2024-11-19 00:01:33.390176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.807 [2024-11-19 00:01:33.390236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.807 [2024-11-19 00:01:33.390367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390410] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.807 [2024-11-19 00:01:33.390429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.807 [2024-11-19 00:01:33.390505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.807 [2024-11-19 00:01:33.390573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.807 [2024-11-19 00:01:33.390582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.807 [2024-11-19 00:01:33.390593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.807 [2024-11-19 00:01:33.390751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.097 ms, result 0 00:17:27.751 00:17:27.751 00:17:27.751 00:01:34 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73996 00:17:27.751 00:01:34 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73996 00:17:27.751 00:01:34 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73996 ']' 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.751 00:01:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:28.013 [2024-11-19 00:01:34.492557] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:28.013 [2024-11-19 00:01:34.492707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:17:28.013 [2024-11-19 00:01:34.654119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.274 [2024-11-19 00:01:34.783015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.847 00:01:35 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.847 00:01:35 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:28.847 00:01:35 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:29.108 [2024-11-19 00:01:35.686723] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.109 [2024-11-19 00:01:35.687026] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.373 [2024-11-19 00:01:35.865525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.865586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.373 [2024-11-19 00:01:35.865605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.373 [2024-11-19 00:01:35.865613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.868647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.868838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.373 [2024-11-19 00:01:35.868863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:17:29.373 [2024-11-19 00:01:35.868872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.869469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.373 [2024-11-19 00:01:35.870257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.373 [2024-11-19 00:01:35.870298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.870307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.373 [2024-11-19 00:01:35.870321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:17:29.373 [2024-11-19 00:01:35.870329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.872541] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.373 [2024-11-19 00:01:35.886716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.886773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.373 [2024-11-19 00:01:35.886788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.183 ms 00:17:29.373 [2024-11-19 00:01:35.886799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.886912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.886927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.373 [2024-11-19 00:01:35.886937] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:29.373 [2024-11-19 00:01:35.886948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.895379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.895431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.373 [2024-11-19 00:01:35.895454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.377 ms 00:17:29.373 [2024-11-19 00:01:35.895464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.895580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.895593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.373 [2024-11-19 00:01:35.895603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:29.373 [2024-11-19 00:01:35.895613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.895646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.895657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.373 [2024-11-19 00:01:35.895665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:29.373 [2024-11-19 00:01:35.895675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.895700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.373 [2024-11-19 00:01:35.899624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.899666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.373 [2024-11-19 00:01:35.899680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 00:17:29.373 [2024-11-19 00:01:35.899690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.899770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.899781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.373 [2024-11-19 00:01:35.899793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:29.373 [2024-11-19 00:01:35.899806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.899830] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:29.373 [2024-11-19 00:01:35.899853] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:29.373 [2024-11-19 00:01:35.899902] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.373 [2024-11-19 00:01:35.899920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:29.373 [2024-11-19 00:01:35.900031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.373 [2024-11-19 00:01:35.900044] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.373 [2024-11-19 00:01:35.900061] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.373 [2024-11-19 00:01:35.900075] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900088] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900098] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.373 [2024-11-19 00:01:35.900109] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.373 [2024-11-19 00:01:35.900118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.373 [2024-11-19 00:01:35.900190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.373 [2024-11-19 00:01:35.900200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.900211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.373 [2024-11-19 00:01:35.900221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:29.373 [2024-11-19 00:01:35.900232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.900324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.373 [2024-11-19 00:01:35.900337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.373 [2024-11-19 00:01:35.900346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:29.373 [2024-11-19 00:01:35.900356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.373 [2024-11-19 00:01:35.900459] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.373 [2024-11-19 00:01:35.900474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.373 [2024-11-19 00:01:35.900483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.373 [2024-11-19 00:01:35.900514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.373 [2024-11-19 00:01:35.900545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.373 [2024-11-19 00:01:35.900562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.373 [2024-11-19 00:01:35.900570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.373 [2024-11-19 00:01:35.900577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.373 [2024-11-19 00:01:35.900585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.373 [2024-11-19 00:01:35.900592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:29.373 [2024-11-19 00:01:35.900600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.373 
[2024-11-19 00:01:35.900608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.373 [2024-11-19 00:01:35.900617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.373 [2024-11-19 00:01:35.900649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.373 [2024-11-19 00:01:35.900675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.373 [2024-11-19 00:01:35.900698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.373 [2024-11-19 00:01:35.900720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.373 [2024-11-19 00:01:35.900735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.373 [2024-11-19 00:01:35.900743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:29.373 [2024-11-19 00:01:35.900754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.373 [2024-11-19 00:01:35.900761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.374 [2024-11-19 00:01:35.900769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:29.374 [2024-11-19 00:01:35.900775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.374 [2024-11-19 00:01:35.900784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.374 [2024-11-19 00:01:35.900791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:29.374 [2024-11-19 00:01:35.900801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.374 [2024-11-19 00:01:35.900807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.374 [2024-11-19 00:01:35.900816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:29.374 [2024-11-19 00:01:35.900822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.374 [2024-11-19 00:01:35.900830] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.374 [2024-11-19 00:01:35.900838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.374 [2024-11-19 00:01:35.900850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.374 [2024-11-19 00:01:35.900857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.374 [2024-11-19 00:01:35.900867] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:29.374 [2024-11-19 00:01:35.900873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.374 [2024-11-19 00:01:35.900882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.374 [2024-11-19 00:01:35.900892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.374 [2024-11-19 00:01:35.900901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.374 [2024-11-19 00:01:35.900907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.374 [2024-11-19 00:01:35.900918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.374 [2024-11-19 00:01:35.900928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.900941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.374 [2024-11-19 00:01:35.900948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:29.374 [2024-11-19 00:01:35.900959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.374 [2024-11-19 00:01:35.900966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:29.374 [2024-11-19 00:01:35.900975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:29.374 [2024-11-19 00:01:35.900982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:29.374 [2024-11-19 00:01:35.900991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:29.374 [2024-11-19 00:01:35.900998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:29.374 [2024-11-19 00:01:35.901007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:29.374 [2024-11-19 00:01:35.901015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:29.374 [2024-11-19 00:01:35.901056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.374 [2024-11-19 
00:01:35.901064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.374 [2024-11-19 00:01:35.901089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.374 [2024-11-19 00:01:35.901098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.374 [2024-11-19 00:01:35.901105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.374 [2024-11-19 00:01:35.901114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.901137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.374 [2024-11-19 00:01:35.901146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:17:29.374 [2024-11-19 00:01:35.901154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.933712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.933761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.374 [2024-11-19 00:01:35.933775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.492 ms 00:17:29.374 [2024-11-19 00:01:35.933783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.933921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.933932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.374 [2024-11-19 00:01:35.933942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:29.374 [2024-11-19 00:01:35.933950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.968940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.968985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.374 [2024-11-19 00:01:35.969002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.962 ms 00:17:29.374 [2024-11-19 00:01:35.969010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.969103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.969114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.374 [2024-11-19 00:01:35.969150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.374 [2024-11-19 00:01:35.969159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.969665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.969705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.374 [2024-11-19 00:01:35.969721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:17:29.374 [2024-11-19 00:01:35.969729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.969884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.969899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.374 [2024-11-19 00:01:35.969910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:29.374 [2024-11-19 00:01:35.969917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:35.988200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:35.988243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.374 [2024-11-19 00:01:35.988256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.256 ms 00:17:29.374 [2024-11-19 00:01:35.988265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:36.002611] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:29.374 [2024-11-19 00:01:36.002657] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.374 [2024-11-19 00:01:36.002673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:36.002681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.374 [2024-11-19 00:01:36.002694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.289 ms 00:17:29.374 [2024-11-19 00:01:36.002701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:36.028520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:36.028567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.374 [2024-11-19 00:01:36.028583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.721 ms 00:17:29.374 [2024-11-19 00:01:36.028591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:36.041641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:36.041813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.374 [2024-11-19 00:01:36.041841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.952 ms 00:17:29.374 [2024-11-19 00:01:36.041849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:36.054361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:36.054402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.374 [2024-11-19 00:01:36.054417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.429 ms 00:17:29.374 [2024-11-19 00:01:36.054425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.374 [2024-11-19 00:01:36.055073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.374 [2024-11-19 00:01:36.055106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.374 [2024-11-19 00:01:36.055119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:29.374 [2024-11-19 00:01:36.055149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 
00:01:36.130012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.130210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.637 [2024-11-19 00:01:36.130233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.833 ms 00:17:29.637 [2024-11-19 00:01:36.130243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.140977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.637 [2024-11-19 00:01:36.154943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.154986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.637 [2024-11-19 00:01:36.154998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:17:29.637 [2024-11-19 00:01:36.155008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.155074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.155087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.637 [2024-11-19 00:01:36.155095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.637 [2024-11-19 00:01:36.155104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.155168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.155180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.637 [2024-11-19 00:01:36.155189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:29.637 [2024-11-19 00:01:36.155202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.155226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.155235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.637 [2024-11-19 00:01:36.155243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.637 [2024-11-19 00:01:36.155252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.155281] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.637 [2024-11-19 00:01:36.155294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.155304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.637 [2024-11-19 00:01:36.155314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.637 [2024-11-19 00:01:36.155321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.178733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.178767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.637 [2024-11-19 00:01:36.178780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.386 ms 00:17:29.637 [2024-11-19 00:01:36.178788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.178880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.637 [2024-11-19 00:01:36.178890] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.637 [2024-11-19 00:01:36.178904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:29.637 [2024-11-19 00:01:36.178911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.637 [2024-11-19 00:01:36.179731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.637 [2024-11-19 00:01:36.182769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.930 ms, result 0 00:17:29.637 [2024-11-19 00:01:36.184755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.637 Some configs were skipped because the RPC state that can call them passed over. 00:17:29.637 00:01:36 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:29.899 [2024-11-19 00:01:36.420739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.899 [2024-11-19 00:01:36.420929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:29.899 [2024-11-19 00:01:36.420996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:17:29.899 [2024-11-19 00:01:36.421023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.899 [2024-11-19 00:01:36.421080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.028 ms, result 0 00:17:29.899 true 00:17:29.899 00:01:36 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:30.161 [2024-11-19 00:01:36.636999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.161 [2024-11-19 00:01:36.637188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:30.161 [2024-11-19 00:01:36.637379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:17:30.161 [2024-11-19 00:01:36.637420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.161 [2024-11-19 00:01:36.637489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.152 ms, result 0 00:17:30.161 true 00:17:30.161 00:01:36 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73996 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73996 ']' 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73996 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73996 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73996' 00:17:30.161 killing process with pid 73996 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73996 00:17:30.161 00:01:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73996 00:17:30.734 [2024-11-19 00:01:37.340408] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.340456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.734 [2024-11-19 00:01:37.340466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:30.734 [2024-11-19 00:01:37.340474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.340493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:30.734 [2024-11-19 00:01:37.342589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.342614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.734 [2024-11-19 00:01:37.342624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:17:30.734 [2024-11-19 00:01:37.342631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.342863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.342871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.734 [2024-11-19 00:01:37.342879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:30.734 [2024-11-19 00:01:37.342885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.346103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.346131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.734 [2024-11-19 00:01:37.346142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:17:30.734 [2024-11-19 00:01:37.346148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.351355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.351471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:30.734 [2024-11-19 00:01:37.351486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:17:30.734 [2024-11-19 00:01:37.351492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.358920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.359016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.734 [2024-11-19 00:01:37.359032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.370 ms 00:17:30.734 [2024-11-19 00:01:37.359042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.365611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.365712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.734 [2024-11-19 00:01:37.365726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:17:30.734 [2024-11-19 00:01:37.365732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.365840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.365848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.734 [2024-11-19 00:01:37.365856] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:30.734 [2024-11-19 00:01:37.365861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.373833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.373859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:30.734 [2024-11-19 00:01:37.373868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.955 ms 00:17:30.734 [2024-11-19 00:01:37.373873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.381335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.381359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:30.734 [2024-11-19 00:01:37.381369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.432 ms 00:17:30.734 [2024-11-19 00:01:37.381374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.388204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.388291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.734 [2024-11-19 00:01:37.388306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.801 ms 00:17:30.734 [2024-11-19 00:01:37.388312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.395139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.734 [2024-11-19 00:01:37.395228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.734 [2024-11-19 00:01:37.395242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms 00:17:30.734 [2024-11-19 00:01:37.395247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.734 [2024-11-19 00:01:37.395272] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.734 [2024-11-19 00:01:37.395282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395351] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.734 [2024-11-19 00:01:37.395491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 
[2024-11-19 00:01:37.395517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:30.735 [2024-11-19 00:01:37.395674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.735 [2024-11-19 00:01:37.395934] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.735 [2024-11-19 00:01:37.395942] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:30.735 [2024-11-19 00:01:37.395954] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.735 [2024-11-19 00:01:37.395960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.735 [2024-11-19 00:01:37.395966] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.735 [2024-11-19 00:01:37.395973] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.735 [2024-11-19 00:01:37.395979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.735 [2024-11-19 00:01:37.395986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.735 [2024-11-19 00:01:37.395991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.735 [2024-11-19 00:01:37.395997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.735 [2024-11-19 00:01:37.396002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.735 [2024-11-19 00:01:37.396008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.735 [2024-11-19 00:01:37.396014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.735 [2024-11-19 00:01:37.396022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:17:30.735 [2024-11-19 00:01:37.396028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.735 [2024-11-19 00:01:37.405629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.735 [2024-11-19 00:01:37.405651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.735 [2024-11-19 00:01:37.405662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.584 ms 00:17:30.735 [2024-11-19 00:01:37.405668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.735 [2024-11-19 00:01:37.405950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.735 [2024-11-19 00:01:37.405958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.735 [2024-11-19 00:01:37.405967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:30.735 [2024-11-19 00:01:37.405973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.440683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.440708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.997 [2024-11-19 00:01:37.440717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.440724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.440798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.440805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.997 [2024-11-19 00:01:37.440814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.440820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.440856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.440863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.997 [2024-11-19 00:01:37.440872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.440877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.440892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.440898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.997 [2024-11-19 00:01:37.440905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.440912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.499427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.499463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.997 [2024-11-19 00:01:37.499474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.499480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 
00:01:37.548709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.548741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.997 [2024-11-19 00:01:37.548751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.548759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.548816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.548823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.997 [2024-11-19 00:01:37.548832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.548838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.548861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.548867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.997 [2024-11-19 00:01:37.548874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.548880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.548949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.548957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.997 [2024-11-19 00:01:37.548964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.548969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.548994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.549001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.997 [2024-11-19 00:01:37.549009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.549015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.549045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.549052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.997 [2024-11-19 00:01:37.549061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.549067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.549102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.997 [2024-11-19 00:01:37.549109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.997 [2024-11-19 00:01:37.549117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.997 [2024-11-19 00:01:37.549142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.997 [2024-11-19 00:01:37.549245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 208.821 ms, result 0 00:17:31.571 00:01:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:31.571 00:01:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.571 [2024-11-19 00:01:38.113489] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:31.571 [2024-11-19 00:01:38.113607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74044 ] 00:17:31.832 [2024-11-19 00:01:38.268652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.832 [2024-11-19 00:01:38.345287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.094 [2024-11-19 00:01:38.549544] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.094 [2024-11-19 00:01:38.549592] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.095 [2024-11-19 00:01:38.701301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.701336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:32.095 [2024-11-19 00:01:38.701346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:32.095 [2024-11-19 00:01:38.701353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.703413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.703562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.095 [2024-11-19 00:01:38.703575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:17:32.095 [2024-11-19 00:01:38.703581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.703636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:32.095 [2024-11-19 00:01:38.704199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:32.095 [2024-11-19 00:01:38.704218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.704224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.095 [2024-11-19 00:01:38.704231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:17:32.095 [2024-11-19 00:01:38.704236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.705181] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:32.095 [2024-11-19 00:01:38.714603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.714715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:32.095 [2024-11-19 00:01:38.714728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.423 ms 00:17:32.095 [2024-11-19 00:01:38.714734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.714798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.714807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:32.095 [2024-11-19 00:01:38.714813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:17:32.095 [2024-11-19 00:01:38.714818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.719102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.719143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.095 [2024-11-19 00:01:38.719151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:17:32.095 [2024-11-19 00:01:38.719156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.719227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.719234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.095 [2024-11-19 00:01:38.719241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:32.095 [2024-11-19 00:01:38.719247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.719263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.719271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:32.095 [2024-11-19 00:01:38.719277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.095 [2024-11-19 00:01:38.719282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.719299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:32.095 [2024-11-19 00:01:38.721857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.721956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.095 [2024-11-19 00:01:38.721968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:17:32.095 [2024-11-19 00:01:38.721973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.722002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.722008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:32.095 [2024-11-19 00:01:38.722015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:32.095 [2024-11-19 00:01:38.722020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.722032] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:32.095 [2024-11-19 00:01:38.722050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:32.095 [2024-11-19 00:01:38.722076] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:32.095 [2024-11-19 00:01:38.722087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:32.095 [2024-11-19 00:01:38.722175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:32.095 [2024-11-19 00:01:38.722183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:32.095 [2024-11-19 00:01:38.722191] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:32.095 [2024-11-19 00:01:38.722199] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722208] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:32.095 [2024-11-19 00:01:38.722220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:32.095 [2024-11-19 00:01:38.722225] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:32.095 [2024-11-19 00:01:38.722231] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:32.095 [2024-11-19 00:01:38.722236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.722242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:32.095 [2024-11-19 00:01:38.722248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:17:32.095 [2024-11-19 00:01:38.722253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.722319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.095 [2024-11-19 00:01:38.722325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:32.095 [2024-11-19 00:01:38.722333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:32.095 [2024-11-19 00:01:38.722339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.095 [2024-11-19 00:01:38.722413] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:32.095 [2024-11-19 00:01:38.722420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:32.095 [2024-11-19 00:01:38.722426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:32.095 [2024-11-19 00:01:38.722443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:32.095 [2024-11-19 00:01:38.722459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.095 [2024-11-19 00:01:38.722469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:32.095 [2024-11-19 00:01:38.722475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:32.095 [2024-11-19 00:01:38.722480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.095 [2024-11-19 00:01:38.722490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:32.095 [2024-11-19 00:01:38.722495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:32.095 [2024-11-19 00:01:38.722500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722506] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:32.095 [2024-11-19 00:01:38.722511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:32.095 [2024-11-19 00:01:38.722526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:32.095 [2024-11-19 00:01:38.722541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:32.095 [2024-11-19 00:01:38.722556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:32.095 [2024-11-19 00:01:38.722571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.095 [2024-11-19 00:01:38.722581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:32.095 [2024-11-19 00:01:38.722586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:32.095 [2024-11-19 00:01:38.722591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.095 [2024-11-19 00:01:38.722596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:32.095 [2024-11-19 00:01:38.722601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:32.095 [2024-11-19 00:01:38.722606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.095 [2024-11-19 00:01:38.722611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:32.095 [2024-11-19 00:01:38.722616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:32.096 [2024-11-19 00:01:38.722621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.096 [2024-11-19 00:01:38.722626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:32.096 [2024-11-19 00:01:38.722631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:32.096 [2024-11-19 00:01:38.722636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.096 [2024-11-19 00:01:38.722641] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:32.096 [2024-11-19 00:01:38.722648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:32.096 [2024-11-19 00:01:38.722653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.096 [2024-11-19 00:01:38.722660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.096 [2024-11-19 00:01:38.722665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:32.096 
[2024-11-19 00:01:38.722671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:32.096 [2024-11-19 00:01:38.722676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:32.096 [2024-11-19 00:01:38.722681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:32.096 [2024-11-19 00:01:38.722686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:32.096 [2024-11-19 00:01:38.722691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:32.096 [2024-11-19 00:01:38.722697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:32.096 [2024-11-19 00:01:38.722704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:32.096 [2024-11-19 00:01:38.722715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:32.096 [2024-11-19 00:01:38.722721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:32.096 [2024-11-19 00:01:38.722726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:32.096 [2024-11-19 00:01:38.722732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:32.096 [2024-11-19 00:01:38.722737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:32.096 [2024-11-19 00:01:38.722742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:32.096 [2024-11-19 00:01:38.722748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:32.096 [2024-11-19 00:01:38.722754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:32.096 [2024-11-19 00:01:38.722759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:32.096 [2024-11-19 00:01:38.722787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:32.096 [2024-11-19 00:01:38.722792] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:32.096 [2024-11-19 00:01:38.722804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:32.096 [2024-11-19 00:01:38.722810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:32.096 [2024-11-19 00:01:38.722815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:32.096 [2024-11-19 00:01:38.722820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.096 [2024-11-19 00:01:38.722826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:32.096 [2024-11-19 00:01:38.722834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:17:32.096 [2024-11-19 00:01:38.722839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.096 [2024-11-19 00:01:38.743417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.096 [2024-11-19 00:01:38.743449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.096 [2024-11-19 00:01:38.743457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.541 ms 00:17:32.096 [2024-11-19 00:01:38.743463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.096 [2024-11-19 00:01:38.743553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.096 [2024-11-19 00:01:38.743563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.096 [2024-11-19 00:01:38.743570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:32.096 [2024-11-19 00:01:38.743576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.785033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.785158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.358 [2024-11-19 00:01:38.785173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.441 ms 00:17:32.358 [2024-11-19 00:01:38.785182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.785241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.785250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.358 [2024-11-19 00:01:38.785257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:32.358 [2024-11-19 00:01:38.785262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.785534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.785545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.358 [2024-11-19 00:01:38.785552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:32.358 [2024-11-19 00:01:38.785558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 
00:01:38.785666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.785680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.358 [2024-11-19 00:01:38.785687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:32.358 [2024-11-19 00:01:38.785692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.796420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.796511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.358 [2024-11-19 00:01:38.796523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.712 ms 00:17:32.358 [2024-11-19 00:01:38.796529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.806331] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:32.358 [2024-11-19 00:01:38.806357] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:32.358 [2024-11-19 00:01:38.806366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.806373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:32.358 [2024-11-19 00:01:38.806379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.753 ms 00:17:32.358 [2024-11-19 00:01:38.806384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.824746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.824779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:32.358 [2024-11-19 00:01:38.824788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:17:32.358 [2024-11-19 00:01:38.824794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.833588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.833613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:32.358 [2024-11-19 00:01:38.833621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.741 ms 00:17:32.358 [2024-11-19 00:01:38.833626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.842014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.842038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:32.358 [2024-11-19 00:01:38.842045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.347 ms 00:17:32.358 [2024-11-19 00:01:38.842051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.842516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.842538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.358 [2024-11-19 00:01:38.842545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:32.358 [2024-11-19 00:01:38.842550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.358 [2024-11-19 00:01:38.886636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:32.358 [2024-11-19 00:01:38.886669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:32.358 [2024-11-19 00:01:38.886679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.068 ms 00:17:32.359 [2024-11-19 00:01:38.886685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.895002] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.359 [2024-11-19 00:01:38.906292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.906319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.359 [2024-11-19 00:01:38.906328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.551 ms 00:17:32.359 [2024-11-19 00:01:38.906335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.906404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.906412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.359 [2024-11-19 00:01:38.906419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:32.359 [2024-11-19 00:01:38.906425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.906459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.906465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.359 [2024-11-19 00:01:38.906471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:32.359 [2024-11-19 00:01:38.906478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.906498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.906506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.359 [2024-11-19 00:01:38.906513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:32.359 [2024-11-19 00:01:38.906518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.906540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.359 [2024-11-19 00:01:38.906548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.906553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.359 [2024-11-19 00:01:38.906559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:32.359 [2024-11-19 00:01:38.906565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.924138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.924164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.359 [2024-11-19 00:01:38.924173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.558 ms 00:17:32.359 [2024-11-19 00:01:38.924180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.924249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.359 [2024-11-19 00:01:38.924258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:32.359 [2024-11-19 00:01:38.924265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:32.359 [2024-11-19 00:01:38.924271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.359 [2024-11-19 00:01:38.924880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.359 [2024-11-19 00:01:38.927149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.353 ms, result 0 00:17:32.359 [2024-11-19 00:01:38.927808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.359 [2024-11-19 00:01:38.942372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.314 [2024-11-19T00:01:52.299Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-19 00:01:52.093924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.607 [2024-11-19 00:01:52.104217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.104266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.607 [2024-11-19 00:01:52.104281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:45.607 [2024-11-19 00:01:52.104297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.104321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.607 [2024-11-19 00:01:52.107340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.107540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.607 [2024-11-19 00:01:52.107562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:17:45.607 [2024-11-19 00:01:52.107570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.107861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.107873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.607 [2024-11-19 00:01:52.107883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:45.607 [2024-11-19 00:01:52.107890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.111603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.111630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:17:45.607 [2024-11-19 00:01:52.111641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:17:45.607 [2024-11-19 00:01:52.111649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.118754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.118926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.607 [2024-11-19 00:01:52.118945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.087 ms 00:17:45.607 [2024-11-19 00:01:52.118953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.144294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.144343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.607 [2024-11-19 00:01:52.144356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.275 ms 00:17:45.607 [2024-11-19 00:01:52.144364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.160208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.160262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.607 [2024-11-19 00:01:52.160275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.796 ms 00:17:45.607 [2024-11-19 00:01:52.160286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.160433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.160444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.607 [2024-11-19 00:01:52.160453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:45.607 [2024-11-19 00:01:52.160461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.185557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.185735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.607 [2024-11-19 00:01:52.185755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.069 ms 00:17:45.607 [2024-11-19 00:01:52.185762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.210729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.210774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.607 [2024-11-19 00:01:52.210784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.845 ms 00:17:45.607 [2024-11-19 00:01:52.210791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.235103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.235154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.607 [2024-11-19 00:01:52.235165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.241 ms 00:17:45.607 [2024-11-19 00:01:52.235172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.259490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.607 [2024-11-19 00:01:52.259531] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.607 [2024-11-19 00:01:52.259542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.244 ms 00:17:45.607 [2024-11-19 00:01:52.259549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.607 [2024-11-19 00:01:52.259592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.607 [2024-11-19 00:01:52.259607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.607 [2024-11-19 00:01:52.259685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:45.608 [2024-11-19 00:01:52.259773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.259997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.608 [2024-11-19 00:01:52.260244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260392] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.609 [2024-11-19 00:01:52.260431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.609 [2024-11-19 00:01:52.260439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:45.609 [2024-11-19 00:01:52.260448] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.609 [2024-11-19 00:01:52.260456] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.609 [2024-11-19 00:01:52.260474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.609 [2024-11-19 00:01:52.260483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.609 [2024-11-19 00:01:52.260490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.609 [2024-11-19 00:01:52.260498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.609 [2024-11-19 00:01:52.260505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.609 [2024-11-19 00:01:52.260512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.609 [2024-11-19 00:01:52.260519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.609 [2024-11-19 00:01:52.260525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.609 [2024-11-19 00:01:52.260536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.609 [2024-11-19 00:01:52.260544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:17:45.609 [2024-11-19 00:01:52.260552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.609 [2024-11-19 00:01:52.273957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.609 [2024-11-19 00:01:52.273995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.609 [2024-11-19 00:01:52.274006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.374 ms 00:17:45.609 [2024-11-19 00:01:52.274014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.609 [2024-11-19 00:01:52.274458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.609 [2024-11-19 00:01:52.274476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.609 [2024-11-19 00:01:52.274486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:17:45.609 [2024-11-19 00:01:52.274493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.312945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.312993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.922 [2024-11-19 00:01:52.313005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.313013] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.313146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.313158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.922 [2024-11-19 00:01:52.313167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.313175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.313223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.313232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.922 [2024-11-19 00:01:52.313240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.313248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.313265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.313276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.922 [2024-11-19 00:01:52.313284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.313291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.396798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.396846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.922 [2024-11-19 00:01:52.396859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.396868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.465975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.922 [2024-11-19 00:01:52.466045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.922 [2024-11-19 00:01:52.466177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.922 [2024-11-19 00:01:52.466240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.922 [2024-11-19 00:01:52.466366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:45.922 [2024-11-19 00:01:52.466374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.922 [2024-11-19 00:01:52.466427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.922 [2024-11-19 00:01:52.466504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.922 [2024-11-19 00:01:52.466572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.922 [2024-11-19 00:01:52.466584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.922 [2024-11-19 00:01:52.466592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.922 [2024-11-19 00:01:52.466749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.522 ms, result 0 00:17:46.501 00:17:46.501 00:17:46.763 00:01:53 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:46.763 00:01:53 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:47.336 00:01:53 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.336 [2024-11-19 00:01:53.856775] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:47.336 [2024-11-19 00:01:53.856927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74219 ] 00:17:47.336 [2024-11-19 00:01:54.021865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.597 [2024-11-19 00:01:54.139062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.858 [2024-11-19 00:01:54.426257] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:47.859 [2024-11-19 00:01:54.426329] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.121 [2024-11-19 00:01:54.588022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.588081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.121 [2024-11-19 00:01:54.588097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:48.121 [2024-11-19 00:01:54.588105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.591062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.591112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.121 [2024-11-19 00:01:54.591136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:17:48.121 [2024-11-19 00:01:54.591145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.591258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.121 [2024-11-19 00:01:54.592201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.121 [2024-11-19 00:01:54.592246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.592256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.121 [2024-11-19 00:01:54.592266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:17:48.121 [2024-11-19 00:01:54.592275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.593981] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:48.121 [2024-11-19 00:01:54.607740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.607790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:48.121 [2024-11-19 00:01:54.607802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.762 ms 00:17:48.121 [2024-11-19 00:01:54.607810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.607921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.607934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:48.121 [2024-11-19 00:01:54.607943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:48.121 [2024-11-19 00:01:54.607951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.615865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:48.121 [2024-11-19 00:01:54.615907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.121 [2024-11-19 00:01:54.615917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.869 ms 00:17:48.121 [2024-11-19 00:01:54.615925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.616029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.616040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.121 [2024-11-19 00:01:54.616049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:48.121 [2024-11-19 00:01:54.616056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.616084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.616096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.121 [2024-11-19 00:01:54.616104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:48.121 [2024-11-19 00:01:54.616112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.616163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:48.121 [2024-11-19 00:01:54.620092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.620143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.121 [2024-11-19 00:01:54.620154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.935 ms 00:17:48.121 [2024-11-19 00:01:54.620162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.620236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.121 [2024-11-19 00:01:54.620246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.121 [2024-11-19 00:01:54.620256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:48.121 [2024-11-19 00:01:54.620264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.121 [2024-11-19 00:01:54.620283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:48.121 [2024-11-19 00:01:54.620307] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:48.121 [2024-11-19 00:01:54.620345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:48.121 [2024-11-19 00:01:54.620361] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:48.121 [2024-11-19 00:01:54.620467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.121 [2024-11-19 00:01:54.620479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.122 [2024-11-19 00:01:54.620490] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.122 [2024-11-19 00:01:54.620501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620512] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:48.122 [2024-11-19 00:01:54.620530] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.122 [2024-11-19 00:01:54.620537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.122 [2024-11-19 00:01:54.620545] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.122 [2024-11-19 00:01:54.620553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.122 [2024-11-19 00:01:54.620562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.122 [2024-11-19 00:01:54.620570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:48.122 [2024-11-19 00:01:54.620577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.122 [2024-11-19 00:01:54.620665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.122 [2024-11-19 00:01:54.620675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.122 [2024-11-19 00:01:54.620686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.122 [2024-11-19 00:01:54.620693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.122 [2024-11-19 00:01:54.620796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.122 [2024-11-19 00:01:54.620816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.122 [2024-11-19 00:01:54.620826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.122 [2024-11-19 00:01:54.620851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.122 [2024-11-19 00:01:54.620874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.122 [2024-11-19 00:01:54.620887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.122 [2024-11-19 00:01:54.620894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:48.122 [2024-11-19 00:01:54.620901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.122 [2024-11-19 00:01:54.620915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.122 [2024-11-19 00:01:54.620923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:48.122 [2024-11-19 00:01:54.620929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.122 [2024-11-19 00:01:54.620944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620951] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.122 [2024-11-19 00:01:54.620966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.122 [2024-11-19 00:01:54.620986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:48.122 [2024-11-19 00:01:54.620993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.122 [2024-11-19 00:01:54.620999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.122 [2024-11-19 00:01:54.621006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.122 [2024-11-19 00:01:54.621020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.122 [2024-11-19 00:01:54.621028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.122 [2024-11-19 00:01:54.621041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.122 [2024-11-19 00:01:54.621049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.122 [2024-11-19 00:01:54.621062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.122 [2024-11-19 00:01:54.621068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:48.122 [2024-11-19 00:01:54.621075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.122 [2024-11-19 00:01:54.621082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.122 [2024-11-19 00:01:54.621089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:48.122 [2024-11-19 00:01:54.621095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.122 [2024-11-19 00:01:54.621108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:48.122 [2024-11-19 00:01:54.621115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.122 [2024-11-19 00:01:54.621143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.122 [2024-11-19 00:01:54.621151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.122 [2024-11-19 00:01:54.621161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.122 [2024-11-19 00:01:54.621169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.122 [2024-11-19 00:01:54.621177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.122 [2024-11-19 00:01:54.621184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.122 
[2024-11-19 00:01:54.621191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.122 [2024-11-19 00:01:54.621198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.122 [2024-11-19 00:01:54.621205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.122 [2024-11-19 00:01:54.621215] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.122 [2024-11-19 00:01:54.621224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:48.122 [2024-11-19 00:01:54.621243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:48.122 [2024-11-19 00:01:54.621250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:48.122 [2024-11-19 00:01:54.621259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:48.122 [2024-11-19 00:01:54.621266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:48.122 [2024-11-19 00:01:54.621273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:48.122 [2024-11-19 00:01:54.621281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:48.122 [2024-11-19 00:01:54.621288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:48.122 [2024-11-19 00:01:54.621296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:48.122 [2024-11-19 00:01:54.621303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:48.122 [2024-11-19 00:01:54.621340] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.122 [2024-11-19 00:01:54.621348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.122 [2024-11-19 00:01:54.621364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.122 [2024-11-19 00:01:54.621372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.122 [2024-11-19 00:01:54.621379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.122 [2024-11-19 00:01:54.621387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.122 [2024-11-19 00:01:54.621395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.122 [2024-11-19 00:01:54.621406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:17:48.122 [2024-11-19 00:01:54.621413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.122 [2024-11-19 00:01:54.652704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.122 [2024-11-19 00:01:54.652753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.122 [2024-11-19 00:01:54.652765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.239 ms 00:17:48.122 [2024-11-19 00:01:54.652773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.652904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.652920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:48.123 [2024-11-19 00:01:54.652928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:48.123 [2024-11-19 00:01:54.652936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.700438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.700646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.123 [2024-11-19 00:01:54.700668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.478 ms 00:17:48.123 [2024-11-19 00:01:54.700682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.700791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.700803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.123 [2024-11-19 00:01:54.700813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.123 [2024-11-19 00:01:54.700821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.701363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.701393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.123 [2024-11-19 00:01:54.701403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:17:48.123 [2024-11-19 00:01:54.701420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.701575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.701593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.123 [2024-11-19 00:01:54.701603] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:48.123 [2024-11-19 00:01:54.701612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.717498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.717538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.123 [2024-11-19 00:01:54.717549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.862 ms 00:17:48.123 [2024-11-19 00:01:54.717557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.731834] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:48.123 [2024-11-19 00:01:54.732012] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:48.123 [2024-11-19 00:01:54.732032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.732041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:48.123 [2024-11-19 00:01:54.732051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.365 ms 00:17:48.123 [2024-11-19 00:01:54.732058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.757668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.757728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:48.123 [2024-11-19 00:01:54.757740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.474 ms 00:17:48.123 [2024-11-19 00:01:54.757749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.770404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.770448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:48.123 [2024-11-19 00:01:54.770460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.561 ms 00:17:48.123 [2024-11-19 00:01:54.770468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.782977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.783020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:48.123 [2024-11-19 00:01:54.783032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:17:48.123 [2024-11-19 00:01:54.783039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.123 [2024-11-19 00:01:54.783743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.123 [2024-11-19 00:01:54.783772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:48.123 [2024-11-19 00:01:54.783783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:17:48.123 [2024-11-19 00:01:54.783791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.848385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.848451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:48.385 [2024-11-19 00:01:54.848468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.565 ms 00:17:48.385 [2024-11-19 00:01:54.848477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.860288] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:48.385 [2024-11-19 00:01:54.879671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.879725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:48.385 [2024-11-19 00:01:54.879739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.088 ms 00:17:48.385 [2024-11-19 00:01:54.879748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.879849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.879861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:48.385 [2024-11-19 00:01:54.879872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:48.385 [2024-11-19 00:01:54.879880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.879939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.879949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.385 [2024-11-19 00:01:54.879958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:48.385 [2024-11-19 00:01:54.879967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.879995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.880006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:48.385 [2024-11-19 00:01:54.880015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:48.385 [2024-11-19 00:01:54.880023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.880061] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:48.385 [2024-11-19 00:01:54.880072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.880082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:48.385 [2024-11-19 00:01:54.880091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:48.385 [2024-11-19 00:01:54.880099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.906048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.906094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:48.385 [2024-11-19 00:01:54.906108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.926 ms 00:17:48.385 [2024-11-19 00:01:54.906116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.385 [2024-11-19 00:01:54.906266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.385 [2024-11-19 00:01:54.906279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:48.385 [2024-11-19 00:01:54.906289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:48.385 [2024-11-19 00:01:54.906297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:48.385 [2024-11-19 00:01:54.907361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.385 [2024-11-19 00:01:54.910930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.994 ms, result 0 00:17:48.385 [2024-11-19 00:01:54.911931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.385 [2024-11-19 00:01:54.925393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.647  [2024-11-19T00:01:55.339Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-19 00:01:55.281673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.648 [2024-11-19 00:01:55.290572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.290623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.648 [2024-11-19 00:01:55.290636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.648 [2024-11-19 00:01:55.290652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.648 [2024-11-19 00:01:55.290673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.648 [2024-11-19 00:01:55.293661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.293704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.648 [2024-11-19 00:01:55.293716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:17:48.648 [2024-11-19 00:01:55.293724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.648 [2024-11-19 00:01:55.296210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.296257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.648 [2024-11-19 00:01:55.296268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:17:48.648 [2024-11-19 00:01:55.296277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.648 [2024-11-19 00:01:55.300658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.300698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.648 [2024-11-19 00:01:55.300708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.365 ms 00:17:48.648 [2024-11-19 00:01:55.300715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.648 [2024-11-19 00:01:55.307832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.307869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.648 [2024-11-19 00:01:55.307879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.085 ms 00:17:48.648 [2024-11-19 00:01:55.307886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.648 [2024-11-19 00:01:55.332494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.648 [2024-11-19 00:01:55.332545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.648 [2024-11-19 00:01:55.332558] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.533 ms 00:17:48.648 [2024-11-19 00:01:55.332565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.348304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.348356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.910 [2024-11-19 00:01:55.348372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.692 ms 00:17:48.910 [2024-11-19 00:01:55.348380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.348526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.348537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.910 [2024-11-19 00:01:55.348547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:48.910 [2024-11-19 00:01:55.348555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.373752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.373799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.910 [2024-11-19 00:01:55.373811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.170 ms 00:17:48.910 [2024-11-19 00:01:55.373817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.398839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.398883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.910 [2024-11-19 00:01:55.398893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.963 ms 00:17:48.910 [2024-11-19 00:01:55.398900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.423882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.423927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.910 [2024-11-19 00:01:55.423937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.936 ms 00:17:48.910 [2024-11-19 00:01:55.423944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.448728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.910 [2024-11-19 00:01:55.448767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.910 [2024-11-19 00:01:55.448778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.711 ms 00:17:48.910 [2024-11-19 00:01:55.448785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.910 [2024-11-19 00:01:55.448830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.910 [2024-11-19 00:01:55.448846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.910 [2024-11-19 00:01:55.448857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.910 [2024-11-19 00:01:55.448865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.910 [2024-11-19 00:01:55.448872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:48.910 [2024-11-19 00:01:55.448880 - 00:01:55.449621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 5-100: 0 / 261120 wr_cnt: 0 state: free (one identical entry per band) 00:17:48.911 [2024-11-19 00:01:55.449637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.911 [2024-11-19 00:01:55.449645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:48.911 [2024-11-19 00:01:55.449653] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.911 [2024-11-19 00:01:55.449660] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:17:48.911 [2024-11-19 00:01:55.449667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.911 [2024-11-19 00:01:55.449676] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.911 [2024-11-19 00:01:55.449683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.911 [2024-11-19 00:01:55.449692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.912 [2024-11-19 00:01:55.449699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.912 [2024-11-19 00:01:55.449706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.912 [2024-11-19 00:01:55.449721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.912 [2024-11-19 00:01:55.449729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.912 [2024-11-19 00:01:55.449740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.912 [2024-11-19 00:01:55.449749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:17:48.912 [2024-11-19 00:01:55.449757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.462791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.912 [2024-11-19 00:01:55.462830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.912 [2024-11-19 00:01:55.462841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.003 ms 00:17:48.912 [2024-11-19 00:01:55.462849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.463274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.912 [2024-11-19 00:01:55.463292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.912 [2024-11-19 00:01:55.463301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:17:48.912 [2024-11-19 00:01:55.463309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.502216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.912 [2024-11-19 00:01:55.502263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.912 [2024-11-19 00:01:55.502273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.912 [2024-11-19 00:01:55.502282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.502367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.912 [2024-11-19 00:01:55.502377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.912 [2024-11-19 00:01:55.502385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.912 [2024-11-19 00:01:55.502393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.502440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.912 [2024-11-19 00:01:55.502450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.912 [2024-11-19 00:01:55.502458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.912 [2024-11-19 00:01:55.502466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.502484] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.912 [2024-11-19 00:01:55.502496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.912 [2024-11-19 00:01:55.502504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.912 [2024-11-19 00:01:55.502511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.912 [2024-11-19 00:01:55.585760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.912 [2024-11-19 00:01:55.585818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.912 [2024-11-19 00:01:55.585832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.912 [2024-11-19 00:01:55.585841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.653908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.653965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.173 [2024-11-19 00:01:55.653978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.653987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.173 [2024-11-19 00:01:55.654063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.173 [2024-11-19 00:01:55.654150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.173 [2024-11-19 00:01:55.654281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.173 [2024-11-19 00:01:55.654342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.173 [2024-11-19 00:01:55.654413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654422] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.173 [2024-11-19 00:01:55.654483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.173 [2024-11-19 00:01:55.654494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.173 [2024-11-19 00:01:55.654502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.173 [2024-11-19 00:01:55.654657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.054 ms, result 0 00:17:49.747 00:17:49.747 00:17:49.747 00:01:56 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74244 00:17:49.747 00:01:56 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74244 00:17:49.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74244 ']' 00:17:49.747 00:01:56 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.747 00:01:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:50.008 [2024-11-19 00:01:56.503635] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
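Above, trim.sh relaunches a fresh spdk_tgt with FTL init tracing and parks in waitforlisten until the default RPC socket (/var/tmp/spdk.sock) answers, then replays the saved configuration over RPC. A minimal bash sketch of that sequence under the repo paths shown in this log; the rpc_get_methods polling loop and config.json are stand-ins for the harness's own waitforlisten helper and config file, not the actual trim.sh code:

# Start the target with FTL init tracing, as the test does above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!

# waitforlisten equivalent: poll until the RPC socket responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# Replay the bdev/FTL configuration saved by the previous run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < config.json

# The trim test then exercises unmap, exactly as invoked later in this log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024

# Tear down.
kill "$svcpid"; wait "$svcpid"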
00:17:50.009 [2024-11-19 00:01:56.503779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74244 ] 00:17:50.009 [2024-11-19 00:01:56.667729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.270 [2024-11-19 00:01:56.786053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.844 00:01:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.844 00:01:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:50.844 00:01:57 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:51.105 [2024-11-19 00:01:57.658391] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.105 [2024-11-19 00:01:57.658465] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.368 [2024-11-19 00:01:57.836788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.368 [2024-11-19 00:01:57.836852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.368 [2024-11-19 00:01:57.836870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.368 [2024-11-19 00:01:57.836878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.368 [2024-11-19 00:01:57.839874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.368 [2024-11-19 00:01:57.839925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.368 [2024-11-19 00:01:57.839937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:17:51.368 [2024-11-19 00:01:57.839946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.368 [2024-11-19 00:01:57.840064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.368 [2024-11-19 00:01:57.840784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.368 [2024-11-19 00:01:57.840810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.840818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.369 [2024-11-19 00:01:57.840830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:17:51.369 [2024-11-19 00:01:57.840837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.842573] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.369 [2024-11-19 00:01:57.856646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.856703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.369 [2024-11-19 00:01:57.856717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.081 ms 00:17:51.369 [2024-11-19 00:01:57.856728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.856837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.856854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.369 [2024-11-19 00:01:57.856863] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:51.369 [2024-11-19 00:01:57.856872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.864755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.864808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.369 [2024-11-19 00:01:57.864818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.820 ms 00:17:51.369 [2024-11-19 00:01:57.864828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.864939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.864953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.369 [2024-11-19 00:01:57.864962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:51.369 [2024-11-19 00:01:57.864972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.865007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.865019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.369 [2024-11-19 00:01:57.865027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.369 [2024-11-19 00:01:57.865037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.865061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.369 [2024-11-19 00:01:57.869054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.869097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.369 [2024-11-19 00:01:57.869109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:17:51.369 [2024-11-19 00:01:57.869119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.869206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.869217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.369 [2024-11-19 00:01:57.869228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:51.369 [2024-11-19 00:01:57.869239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.869262] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.369 [2024-11-19 00:01:57.869283] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:51.369 [2024-11-19 00:01:57.869328] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.369 [2024-11-19 00:01:57.869344] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:51.369 [2024-11-19 00:01:57.869452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:51.369 [2024-11-19 00:01:57.869463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.369 [2024-11-19 00:01:57.869479] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:51.369 [2024-11-19 00:01:57.869492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869511] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.369 [2024-11-19 00:01:57.869521] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.369 [2024-11-19 00:01:57.869529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:51.369 [2024-11-19 00:01:57.869540] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:51.369 [2024-11-19 00:01:57.869548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.869558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.369 [2024-11-19 00:01:57.869565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:51.369 [2024-11-19 00:01:57.869575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.869665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.369 [2024-11-19 00:01:57.869675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.369 [2024-11-19 00:01:57.869684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:51.369 [2024-11-19 00:01:57.869693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.369 [2024-11-19 00:01:57.869793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.369 [2024-11-19 00:01:57.869805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.369 [2024-11-19 00:01:57.869814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.369 [2024-11-19 00:01:57.869840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.369 [2024-11-19 00:01:57.869865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.369 [2024-11-19 00:01:57.869882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.369 [2024-11-19 00:01:57.869889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.369 [2024-11-19 00:01:57.869896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.369 [2024-11-19 00:01:57.869904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.369 [2024-11-19 00:01:57.869913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:51.369 [2024-11-19 00:01:57.869922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.369 
[2024-11-19 00:01:57.869929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.369 [2024-11-19 00:01:57.869938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.369 [2024-11-19 00:01:57.869966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.369 [2024-11-19 00:01:57.869981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.369 [2024-11-19 00:01:57.869991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.369 [2024-11-19 00:01:57.869998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.369 [2024-11-19 00:01:57.870007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.369 [2024-11-19 00:01:57.870014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.369 [2024-11-19 00:01:57.870022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.369 [2024-11-19 00:01:57.870028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.369 [2024-11-19 00:01:57.870036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:51.369 [2024-11-19 00:01:57.870043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.369 [2024-11-19 00:01:57.870052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.369 [2024-11-19 00:01:57.870058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:51.370 [2024-11-19 00:01:57.870067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.370 [2024-11-19 00:01:57.870074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.370 [2024-11-19 00:01:57.870082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:51.370 [2024-11-19 00:01:57.870088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.370 [2024-11-19 00:01:57.870096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:51.370 [2024-11-19 00:01:57.870103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:51.370 [2024-11-19 00:01:57.870113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.370 [2024-11-19 00:01:57.870134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:51.370 [2024-11-19 00:01:57.870143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:51.370 [2024-11-19 00:01:57.870150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.370 [2024-11-19 00:01:57.870158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.370 [2024-11-19 00:01:57.870166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.370 [2024-11-19 00:01:57.870177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.370 [2024-11-19 00:01:57.870186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.370 [2024-11-19 00:01:57.870196] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:51.370 [2024-11-19 00:01:57.870203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.370 [2024-11-19 00:01:57.870212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.370 [2024-11-19 00:01:57.870219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.370 [2024-11-19 00:01:57.870227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.370 [2024-11-19 00:01:57.870235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.370 [2024-11-19 00:01:57.870245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.370 [2024-11-19 00:01:57.870255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.370 [2024-11-19 00:01:57.870275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:51.370 [2024-11-19 00:01:57.870285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.370 [2024-11-19 00:01:57.870292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:51.370 [2024-11-19 00:01:57.870303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:51.370 [2024-11-19 00:01:57.870310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:51.370 [2024-11-19 00:01:57.870318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:51.370 [2024-11-19 00:01:57.870325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:51.370 [2024-11-19 00:01:57.870334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:51.370 [2024-11-19 00:01:57.870341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:51.370 [2024-11-19 00:01:57.870382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.370 [2024-11-19 
00:01:57.870390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.370 [2024-11-19 00:01:57.870409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.370 [2024-11-19 00:01:57.870418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.370 [2024-11-19 00:01:57.870425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.370 [2024-11-19 00:01:57.870434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.870441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.370 [2024-11-19 00:01:57.870451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:17:51.370 [2024-11-19 00:01:57.870460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.901767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.901815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.370 [2024-11-19 00:01:57.901830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.243 ms 00:17:51.370 [2024-11-19 00:01:57.901838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.901973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.901984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.370 [2024-11-19 00:01:57.901995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:51.370 [2024-11-19 00:01:57.902003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.936609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.936654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.370 [2024-11-19 00:01:57.936672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.580 ms 00:17:51.370 [2024-11-19 00:01:57.936680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.936767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.936777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.370 [2024-11-19 00:01:57.936788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.370 [2024-11-19 00:01:57.936796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.937382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.937413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.370 [2024-11-19 00:01:57.937429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:17:51.370 [2024-11-19 00:01:57.937437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.937588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.937604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.370 [2024-11-19 00:01:57.937615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:51.370 [2024-11-19 00:01:57.937623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.955207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.955252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.370 [2024-11-19 00:01:57.955265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.556 ms 00:17:51.370 [2024-11-19 00:01:57.955273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.370 [2024-11-19 00:01:57.969413] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:51.370 [2024-11-19 00:01:57.969461] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.370 [2024-11-19 00:01:57.969478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.370 [2024-11-19 00:01:57.969486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.371 [2024-11-19 00:01:57.969498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.093 ms 00:17:51.371 [2024-11-19 00:01:57.969505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.371 [2024-11-19 00:01:57.995002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.371 [2024-11-19 00:01:57.995052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.371 [2024-11-19 00:01:57.995067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.405 ms 00:17:51.371 [2024-11-19 00:01:57.995075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.371 [2024-11-19 00:01:58.007978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.371 [2024-11-19 00:01:58.008021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.371 [2024-11-19 00:01:58.008038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms 00:17:51.371 [2024-11-19 00:01:58.008045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.371 [2024-11-19 00:01:58.020359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.371 [2024-11-19 00:01:58.020403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.371 [2024-11-19 00:01:58.020417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.200 ms 00:17:51.371 [2024-11-19 00:01:58.020425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.371 [2024-11-19 00:01:58.021065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.371 [2024-11-19 00:01:58.021089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.371 [2024-11-19 00:01:58.021101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:51.371 [2024-11-19 00:01:58.021108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 
00:01:58.093110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.093198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.632 [2024-11-19 00:01:58.093218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.955 ms 00:17:51.632 [2024-11-19 00:01:58.093229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.104522] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.632 [2024-11-19 00:01:58.123635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.123694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.632 [2024-11-19 00:01:58.123712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.298 ms 00:17:51.632 [2024-11-19 00:01:58.123725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.123815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.123828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.632 [2024-11-19 00:01:58.123838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:51.632 [2024-11-19 00:01:58.123849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.123905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.123916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.632 [2024-11-19 00:01:58.123925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:51.632 [2024-11-19 00:01:58.123935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.123963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.123974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.632 [2024-11-19 00:01:58.123982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.632 [2024-11-19 00:01:58.123994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.124029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.632 [2024-11-19 00:01:58.124043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.124052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.632 [2024-11-19 00:01:58.124066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:51.632 [2024-11-19 00:01:58.124073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.150100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.150184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.632 [2024-11-19 00:01:58.150203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.997 ms 00:17:51.632 [2024-11-19 00:01:58.150212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.150347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.632 [2024-11-19 00:01:58.150359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.632 [2024-11-19 00:01:58.150371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:51.632 [2024-11-19 00:01:58.150382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.632 [2024-11-19 00:01:58.151479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.632 [2024-11-19 00:01:58.154850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.321 ms, result 0 00:17:51.632 [2024-11-19 00:01:58.156931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.632 Some configs were skipped because the RPC state that can call them passed over. 00:17:51.632 00:01:58 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:51.894 [2024-11-19 00:01:58.397812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.894 [2024-11-19 00:01:58.397882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:51.894 [2024-11-19 00:01:58.397896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:17:51.894 [2024-11-19 00:01:58.397907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.894 [2024-11-19 00:01:58.397943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.381 ms, result 0 00:17:51.894 true 00:17:51.894 00:01:58 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:52.155 [2024-11-19 00:01:58.613884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.155 [2024-11-19 00:01:58.613943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:52.155 [2024-11-19 00:01:58.613959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:17:52.155 [2024-11-19 00:01:58.613967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.155 [2024-11-19 00:01:58.614008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.114 ms, result 0 00:17:52.155 true 00:17:52.155 00:01:58 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74244 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74244 ']' 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74244 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74244 00:17:52.155 killing process with pid 74244 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74244' 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74244 00:17:52.155 00:01:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74244 00:17:52.727 [2024-11-19 00:01:59.414603] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.728 [2024-11-19 00:01:59.414679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.728 [2024-11-19 00:01:59.414695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:52.728 [2024-11-19 00:01:59.414705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.728 [2024-11-19 00:01:59.414730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.991 [2024-11-19 00:01:59.417769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.417817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.991 [2024-11-19 00:01:59.417833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:17:52.991 [2024-11-19 00:01:59.417841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.418191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.418203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.991 [2024-11-19 00:01:59.418215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:52.991 [2024-11-19 00:01:59.418223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.423567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.423611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.991 [2024-11-19 00:01:59.423627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.086 ms 00:17:52.991 [2024-11-19 00:01:59.423636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.430601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.430643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.991 [2024-11-19 00:01:59.430656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.918 ms 00:17:52.991 [2024-11-19 00:01:59.430664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.446766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.446876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.991 [2024-11-19 00:01:59.446925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:17:52.991 [2024-11-19 00:01:59.446965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.456036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.456088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.991 [2024-11-19 00:01:59.456106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.926 ms 00:17:52.991 [2024-11-19 00:01:59.456114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.456282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.456294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.991 [2024-11-19 00:01:59.456305] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:52.991 [2024-11-19 00:01:59.456315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.466925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.466968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.991 [2024-11-19 00:01:59.466981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.586 ms 00:17:52.991 [2024-11-19 00:01:59.466988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.477040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.477085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.991 [2024-11-19 00:01:59.477101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.998 ms 00:17:52.991 [2024-11-19 00:01:59.477108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.486571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.486613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.991 [2024-11-19 00:01:59.486628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.401 ms 00:17:52.991 [2024-11-19 00:01:59.486636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.496226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.991 [2024-11-19 00:01:59.496270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.991 [2024-11-19 00:01:59.496283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.510 ms 00:17:52.991 [2024-11-19 00:01:59.496289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.991 [2024-11-19 00:01:59.496350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.991 [2024-11-19 00:01:59.496367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496459] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 
[2024-11-19 00:01:59.496677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.991 [2024-11-19 00:01:59.496773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.992 [2024-11-19 00:01:59.496902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.496992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.992 [2024-11-19 00:01:59.497285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.992 [2024-11-19 00:01:59.497300] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:17:52.992 [2024-11-19 00:01:59.497315] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.992 [2024-11-19 00:01:59.497328] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.992 [2024-11-19 00:01:59.497336] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.992 [2024-11-19 00:01:59.497346] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.992 [2024-11-19 00:01:59.497353] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.992 [2024-11-19 00:01:59.497363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.992 [2024-11-19 00:01:59.497371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.992 [2024-11-19 00:01:59.497379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.992 [2024-11-19 00:01:59.497386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.992 [2024-11-19 00:01:59.497396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:52.992 [2024-11-19 00:01:59.497404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.992 [2024-11-19 00:01:59.497415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:17:52.992 [2024-11-19 00:01:59.497422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.510997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.992 [2024-11-19 00:01:59.511038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.992 [2024-11-19 00:01:59.511055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.533 ms 00:17:52.992 [2024-11-19 00:01:59.511063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.511532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.992 [2024-11-19 00:01:59.511599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.992 [2024-11-19 00:01:59.511612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:52.992 [2024-11-19 00:01:59.511623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.560402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.992 [2024-11-19 00:01:59.560448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.992 [2024-11-19 00:01:59.560463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.992 [2024-11-19 00:01:59.560472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.560569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.992 [2024-11-19 00:01:59.560579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.992 [2024-11-19 00:01:59.560591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.992 [2024-11-19 00:01:59.560602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.560659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.992 [2024-11-19 00:01:59.560670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.992 [2024-11-19 00:01:59.560684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.992 [2024-11-19 00:01:59.560692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.992 [2024-11-19 00:01:59.560712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.992 [2024-11-19 00:01:59.560720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.992 [2024-11-19 00:01:59.560730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.993 [2024-11-19 00:01:59.560738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.993 [2024-11-19 00:01:59.644603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.993 [2024-11-19 00:01:59.644659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.993 [2024-11-19 00:01:59.644675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.993 [2024-11-19 00:01:59.644684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 
00:01:59.713434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.254 [2024-11-19 00:01:59.713512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.713604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.254 [2024-11-19 00:01:59.713630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.713674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.254 [2024-11-19 00:01:59.713694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.713806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.254 [2024-11-19 00:01:59.713826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.713870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.254 [2024-11-19 00:01:59.713890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.713945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.713957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.254 [2024-11-19 00:01:59.713970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.713978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.714032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.254 [2024-11-19 00:01:59.714043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.254 [2024-11-19 00:01:59.714054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.254 [2024-11-19 00:01:59.714060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.254 [2024-11-19 00:01:59.714251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 299.620 ms, result 0 00:17:53.826 00:02:00 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.826 [2024-11-19 00:02:00.496724] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:53.826 [2024-11-19 00:02:00.496874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:17:54.088 [2024-11-19 00:02:00.655996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.088 [2024-11-19 00:02:00.738533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.349 [2024-11-19 00:02:00.942578] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.349 [2024-11-19 00:02:00.942628] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.610 [2024-11-19 00:02:01.101703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.101752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.610 [2024-11-19 00:02:01.101766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.610 [2024-11-19 00:02:01.101774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.104473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.104511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.610 [2024-11-19 00:02:01.104521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:17:54.610 [2024-11-19 00:02:01.104528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.104602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.610 [2024-11-19 00:02:01.105282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.610 [2024-11-19 00:02:01.105307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.105315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.610 [2024-11-19 00:02:01.105323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:17:54.610 [2024-11-19 00:02:01.105330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.106568] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.610 [2024-11-19 00:02:01.119601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.119638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.610 [2024-11-19 00:02:01.119649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:17:54.610 [2024-11-19 00:02:01.119657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.119744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.119755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.610 [2024-11-19 00:02:01.119763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:54.610 [2024-11-19 
00:02:01.119770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.125266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.125296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.610 [2024-11-19 00:02:01.125305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.455 ms 00:17:54.610 [2024-11-19 00:02:01.125313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.125399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.125408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.610 [2024-11-19 00:02:01.125416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:54.610 [2024-11-19 00:02:01.125423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.125446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.125456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.610 [2024-11-19 00:02:01.125464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.610 [2024-11-19 00:02:01.125471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.125491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.610 [2024-11-19 00:02:01.129011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.129039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.610 [2024-11-19 00:02:01.129048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 00:17:54.610 [2024-11-19 00:02:01.129055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.610 [2024-11-19 00:02:01.129090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.610 [2024-11-19 00:02:01.129098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.610 [2024-11-19 00:02:01.129107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.610 [2024-11-19 00:02:01.129113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.129141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.611 [2024-11-19 00:02:01.129161] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.611 [2024-11-19 00:02:01.129197] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.611 [2024-11-19 00:02:01.129212] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:54.611 [2024-11-19 00:02:01.129314] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.611 [2024-11-19 00:02:01.129324] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.611 [2024-11-19 00:02:01.129334] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:54.611 [2024-11-19 00:02:01.129344] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129355] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.611 [2024-11-19 00:02:01.129371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.611 [2024-11-19 00:02:01.129378] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.611 [2024-11-19 00:02:01.129385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.611 [2024-11-19 00:02:01.129393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.129400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.611 [2024-11-19 00:02:01.129408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:54.611 [2024-11-19 00:02:01.129414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.129513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.129529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.611 [2024-11-19 00:02:01.129539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:54.611 [2024-11-19 00:02:01.129546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.129646] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.611 [2024-11-19 00:02:01.129657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.611 [2024-11-19 00:02:01.129665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.611 [2024-11-19 00:02:01.129687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.611 [2024-11-19 00:02:01.129708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.611 [2024-11-19 00:02:01.129720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.611 [2024-11-19 00:02:01.129727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.611 [2024-11-19 00:02:01.129734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.611 [2024-11-19 00:02:01.129747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.611 [2024-11-19 00:02:01.129755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.611 [2024-11-19 00:02:01.129762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:54.611 [2024-11-19 00:02:01.129775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.611 [2024-11-19 00:02:01.129795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.611 [2024-11-19 00:02:01.129814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.611 [2024-11-19 00:02:01.129833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.611 [2024-11-19 00:02:01.129852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.611 [2024-11-19 00:02:01.129871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.611 [2024-11-19 00:02:01.129883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.611 [2024-11-19 00:02:01.129890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.611 [2024-11-19 00:02:01.129896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.611 [2024-11-19 00:02:01.129902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.611 [2024-11-19 00:02:01.129909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.611 [2024-11-19 00:02:01.129915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.611 [2024-11-19 00:02:01.129927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.611 [2024-11-19 00:02:01.129934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129941] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.611 [2024-11-19 00:02:01.129948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.611 [2024-11-19 00:02:01.129955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.611 [2024-11-19 00:02:01.129965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-11-19 00:02:01.129973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.611 [2024-11-19 00:02:01.129980] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.611 [2024-11-19 00:02:01.129987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.611 [2024-11-19 00:02:01.129993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.611 [2024-11-19 00:02:01.130000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.611 [2024-11-19 00:02:01.130006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.611 [2024-11-19 00:02:01.130014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.611 [2024-11-19 00:02:01.130024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.611 [2024-11-19 00:02:01.130039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.611 [2024-11-19 00:02:01.130046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.611 [2024-11-19 00:02:01.130053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.611 [2024-11-19 00:02:01.130059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.611 [2024-11-19 00:02:01.130067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.611 [2024-11-19 00:02:01.130074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.611 [2024-11-19 00:02:01.130080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.611 [2024-11-19 00:02:01.130087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.611 [2024-11-19 00:02:01.130093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.611 [2024-11-19 00:02:01.130141] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.611 [2024-11-19 00:02:01.130149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.611 [2024-11-19 00:02:01.130165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.611 [2024-11-19 00:02:01.130172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.611 [2024-11-19 00:02:01.130179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.611 [2024-11-19 00:02:01.130186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.130194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.611 [2024-11-19 00:02:01.130205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:17:54.611 [2024-11-19 00:02:01.130212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.158026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.158071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.611 [2024-11-19 00:02:01.158083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.752 ms 00:17:54.611 [2024-11-19 00:02:01.158092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.158233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.158248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.611 [2024-11-19 00:02:01.158282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:54.611 [2024-11-19 00:02:01.158289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.203872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.203924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.611 [2024-11-19 00:02:01.203937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.559 ms 00:17:54.611 [2024-11-19 00:02:01.203950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.204059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.204072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.611 [2024-11-19 00:02:01.204081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.611 [2024-11-19 00:02:01.204089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.204637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.204663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.611 [2024-11-19 00:02:01.204673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:54.611 [2024-11-19 00:02:01.204690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.204842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.204861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.611 [2024-11-19 00:02:01.204869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:54.611 [2024-11-19 00:02:01.204877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-11-19 00:02:01.220883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-11-19 00:02:01.220930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.611 [2024-11-19 00:02:01.220941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.983 ms 00:17:54.611 [2024-11-19 00:02:01.220949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-11-19 00:02:01.235332] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:54.612 [2024-11-19 00:02:01.235380] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.612 [2024-11-19 00:02:01.235394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-11-19 00:02:01.235402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.612 [2024-11-19 00:02:01.235412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:17:54.612 [2024-11-19 00:02:01.235420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-11-19 00:02:01.261167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-11-19 00:02:01.261225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.612 [2024-11-19 00:02:01.261238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.644 ms 00:17:54.612 [2024-11-19 00:02:01.261246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-11-19 00:02:01.274274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-11-19 00:02:01.274459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.612 [2024-11-19 00:02:01.274479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.935 ms 00:17:54.612 [2024-11-19 00:02:01.274488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-11-19 00:02:01.287199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-11-19 00:02:01.287244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.612 [2024-11-19 00:02:01.287256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.631 ms 00:17:54.612 [2024-11-19 00:02:01.287263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-11-19 00:02:01.287930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-11-19 00:02:01.287956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.612 [2024-11-19 00:02:01.287967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:17:54.612 [2024-11-19 00:02:01.287976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.353093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 
00:02:01.353173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.873 [2024-11-19 00:02:01.353188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.092 ms 00:17:54.873 [2024-11-19 00:02:01.353197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.364281] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.873 [2024-11-19 00:02:01.383148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.383200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.873 [2024-11-19 00:02:01.383213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.845 ms 00:17:54.873 [2024-11-19 00:02:01.383222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.383323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.383335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.873 [2024-11-19 00:02:01.383345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:54.873 [2024-11-19 00:02:01.383354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.383412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.383421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.873 [2024-11-19 00:02:01.383430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:54.873 [2024-11-19 00:02:01.383438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.383483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.383494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.873 [2024-11-19 00:02:01.383503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.873 [2024-11-19 00:02:01.383511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.383547] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.873 [2024-11-19 00:02:01.383557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.383566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.873 [2024-11-19 00:02:01.383575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:54.873 [2024-11-19 00:02:01.383583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.409437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.409485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.873 [2024-11-19 00:02:01.409499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:17:54.873 [2024-11-19 00:02:01.409508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.409647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-11-19 00:02:01.409659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.873 [2024-11-19 
00:02:01.409669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:54.873 [2024-11-19 00:02:01.409677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-11-19 00:02:01.410937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.873 [2024-11-19 00:02:01.414464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.906 ms, result 0 00:17:54.873 [2024-11-19 00:02:01.415873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.873 [2024-11-19 00:02:01.429239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.817  [2024-11-19T00:02:03.894Z] Copying: 17/256 [MB] (17 MBps) [2024-11-19T00:02:04.835Z] Copying: 45/256 [MB] (28 MBps) [2024-11-19T00:02:05.780Z] Copying: 64/256 [MB] (19 MBps) [2024-11-19T00:02:06.724Z] Copying: 84/256 [MB] (19 MBps) [2024-11-19T00:02:07.666Z] Copying: 111/256 [MB] (27 MBps) [2024-11-19T00:02:08.610Z] Copying: 129/256 [MB] (17 MBps) [2024-11-19T00:02:09.551Z] Copying: 149/256 [MB] (19 MBps) [2024-11-19T00:02:10.495Z] Copying: 169/256 [MB] (20 MBps) [2024-11-19T00:02:11.882Z] Copying: 190/256 [MB] (21 MBps) [2024-11-19T00:02:12.825Z] Copying: 204/256 [MB] (13 MBps) [2024-11-19T00:02:13.769Z] Copying: 223/256 [MB] (18 MBps) [2024-11-19T00:02:14.343Z] Copying: 244/256 [MB] (20 MBps) [2024-11-19T00:02:14.604Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-19 00:02:14.577339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.912 [2024-11-19 00:02:14.592747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.913 [2024-11-19 00:02:14.592918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:07.913 [2024-11-19 00:02:14.592993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:07.913 [2024-11-19 00:02:14.593029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.913 [2024-11-19 00:02:14.593075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:07.913 [2024-11-19 00:02:14.596118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.913 [2024-11-19 00:02:14.596280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:07.913 [2024-11-19 00:02:14.596351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:18:07.913 [2024-11-19 00:02:14.596375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.913 [2024-11-19 00:02:14.596685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.913 [2024-11-19 00:02:14.596722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:07.913 [2024-11-19 00:02:14.596744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:07.913 [2024-11-19 00:02:14.596808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.913 [2024-11-19 00:02:14.600539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.913 [2024-11-19 00:02:14.600652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:07.913 [2024-11-19 00:02:14.600706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.698 ms 00:18:07.913 [2024-11-19 00:02:14.600728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.607728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.607764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:08.175 [2024-11-19 00:02:14.607775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:18:08.175 [2024-11-19 00:02:14.607784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.633017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.633062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.175 [2024-11-19 00:02:14.633075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.165 ms 00:18:08.175 [2024-11-19 00:02:14.633083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.649655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.649708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.175 [2024-11-19 00:02:14.649721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.541 ms 00:18:08.175 [2024-11-19 00:02:14.649733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.649889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.649902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.175 [2024-11-19 00:02:14.649911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:08.175 [2024-11-19 00:02:14.649919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.675576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.675621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:08.175 [2024-11-19 00:02:14.675633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.627 ms 00:18:08.175 [2024-11-19 00:02:14.675641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.700608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.700653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:08.175 [2024-11-19 00:02:14.700665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:18:08.175 [2024-11-19 00:02:14.700672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.725072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.725258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.175 [2024-11-19 00:02:14.725279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:18:08.175 [2024-11-19 00:02:14.725286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.750031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.175 [2024-11-19 00:02:14.750073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:08.175 
[2024-11-19 00:02:14.750085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.685 ms 00:18:08.175 [2024-11-19 00:02:14.750093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.175 [2024-11-19 00:02:14.750120] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.175 [2024-11-19 00:02:14.750162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.175 [2024-11-19 00:02:14.750421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750535] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 
00:02:14.750723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:18:08.176 [2024-11-19 00:02:14.750923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:08.176 [2024-11-19 00:02:14.750955] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.176 [2024-11-19 00:02:14.750963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00376b2c-fdd4-4b48-80c2-de968aa5cce4 00:18:08.176 [2024-11-19 00:02:14.750973] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.176 [2024-11-19 00:02:14.750981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.176 [2024-11-19 00:02:14.750989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.176 [2024-11-19 00:02:14.750997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.176 [2024-11-19 00:02:14.751005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.176 [2024-11-19 00:02:14.751013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.176 [2024-11-19 00:02:14.751021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.176 [2024-11-19 00:02:14.751028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.176 [2024-11-19 00:02:14.751035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.176 [2024-11-19 00:02:14.751042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.176 [2024-11-19 00:02:14.751053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.176 [2024-11-19 00:02:14.751062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:18:08.176 [2024-11-19 00:02:14.751069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.176 [2024-11-19 00:02:14.764585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.176 [2024-11-19 00:02:14.764749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.176 [2024-11-19 00:02:14.764766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.481 ms 00:18:08.176 [2024-11-19 00:02:14.764774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.176 [2024-11-19 00:02:14.765196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.176 [2024-11-19 00:02:14.765210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.176 [2024-11-19 00:02:14.765220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:18:08.176 [2024-11-19 00:02:14.765227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.176 [2024-11-19 00:02:14.804080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.176 [2024-11-19 00:02:14.804276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.176 [2024-11-19 00:02:14.804297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.176 [2024-11-19 00:02:14.804306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.176 [2024-11-19 00:02:14.804416] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.176 [2024-11-19 00:02:14.804426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.177 [2024-11-19 00:02:14.804434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.177 [2024-11-19 00:02:14.804442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.177 [2024-11-19 00:02:14.804494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.177 [2024-11-19 00:02:14.804505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.177 [2024-11-19 00:02:14.804513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.177 [2024-11-19 00:02:14.804521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.177 [2024-11-19 00:02:14.804539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.177 [2024-11-19 00:02:14.804551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.177 [2024-11-19 00:02:14.804558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.177 [2024-11-19 00:02:14.804566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.888913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.888968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.438 [2024-11-19 00:02:14.888981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.888989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.959417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.959628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.438 [2024-11-19 00:02:14.959648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.959658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.959735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.959745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.438 [2024-11-19 00:02:14.959754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.959763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.959795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.959803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.438 [2024-11-19 00:02:14.959818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.959826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.959932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.959943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.438 [2024-11-19 00:02:14.959952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.959960] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.959993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.960003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.438 [2024-11-19 00:02:14.960012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.960024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.960068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.960078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.438 [2024-11-19 00:02:14.960087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.960095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.960170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.438 [2024-11-19 00:02:14.960183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.438 [2024-11-19 00:02:14.960195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.438 [2024-11-19 00:02:14.960203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.438 [2024-11-19 00:02:14.960358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.607 ms, result 0 00:18:09.010 00:18:09.010 00:18:09.271 00:02:15 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:09.844 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:09.844 00:02:16 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74244 00:18:09.844 00:02:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74244 ']' 00:18:09.844 00:02:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74244 00:18:09.844 Process with pid 74244 is not found 00:18:09.844 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74244) - No such process 00:18:09.844 00:02:16 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74244 is not found' 00:18:09.844 00:18:09.844 real 1m14.251s 00:18:09.844 user 1m31.018s 00:18:09.844 sys 0m14.471s 00:18:09.844 ************************************ 00:18:09.844 00:02:16 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.844 00:02:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:09.844 END TEST ftl_trim 00:18:09.844 ************************************ 00:18:09.844 00:02:16 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:09.844 00:02:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
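[editor note] The killprocess step traced above probes pid 74244 with kill -0 before doing anything destructive; because the FTL target app had already exited, the probe fails ("No such process") and the helper only logs the miss instead of failing the run. A minimal sketch of that guard, assuming a standalone helper rather than the full autotest_common.sh implementation:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # no pid recorded, nothing to kill
        if kill -0 "$pid" 2>/dev/null; then            # signal 0 only tests existence
            kill "$pid"                                # still alive: terminate it
        else
            echo "Process with pid $pid is not found"  # already gone, as in this trace
        fi
    }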
00:18:09.844 00:02:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.844 00:02:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:09.844 ************************************ 00:18:09.844 START TEST ftl_restore 00:18:09.844 ************************************ 00:18:09.844 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:09.844 * Looking for test storage... 00:18:09.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.844 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.844 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.844 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.107 00:02:16 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.107 --rc genhtml_branch_coverage=1 00:18:10.107 --rc genhtml_function_coverage=1 00:18:10.107 --rc genhtml_legend=1 00:18:10.107 --rc geninfo_all_blocks=1 00:18:10.107 --rc geninfo_unexecuted_blocks=1 00:18:10.107 00:18:10.107 ' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.107 --rc genhtml_branch_coverage=1 00:18:10.107 --rc genhtml_function_coverage=1 00:18:10.107 --rc genhtml_legend=1 00:18:10.107 --rc geninfo_all_blocks=1 00:18:10.107 --rc geninfo_unexecuted_blocks=1 00:18:10.107 00:18:10.107 ' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.107 --rc genhtml_branch_coverage=1 00:18:10.107 --rc genhtml_function_coverage=1 00:18:10.107 --rc genhtml_legend=1 00:18:10.107 --rc geninfo_all_blocks=1 00:18:10.107 --rc geninfo_unexecuted_blocks=1 00:18:10.107 00:18:10.107 ' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.107 --rc genhtml_branch_coverage=1 00:18:10.107 --rc genhtml_function_coverage=1 00:18:10.107 --rc genhtml_legend=1 00:18:10.107 --rc geninfo_all_blocks=1 00:18:10.107 --rc geninfo_unexecuted_blocks=1 00:18:10.107 00:18:10.107 ' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
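[editor note] The lcov version gate being stepped through here splits each version string on '.', '-', and ':' and compares the fields numerically, left to right, with the shorter version padded with zeros. A condensed sketch of that comparison (the in-tree scripts/common.sh handles more operators and non-numeric fields):

    cmp_versions() {                 # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-: v d1 d2 ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}          # missing fields compare as 0
            ((d1 > d2)) && { [[ $2 == '>' ]]; return; }
            ((d1 < d2)) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '=' ]]                                # every field matched
    }

For the 1.15-vs-2 check in the trace the first fields already decide it: 1 < 2, so 'lt 1.15 2' succeeds and the branch/function-coverage lcov options are enabled.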
00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.nhbsZnY04O 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:10.107 
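[editor note] restore.sh reads its controllers from the command line before anything is started: -c selects the NV-cache PCIe address (0000:00:10.0 above) and the first positional argument left after the options is the base device (0000:00:11.0). A stripped-down sketch of that option handling; the meanings of -u and -f are assumptions, since only -c is exercised in this run:

    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;        # assumed: restore an existing instance by UUID
            c) nv_cache=$OPTARG ;;    # NV-cache controller, 0000:00:10.0 here
            f) fast=1 ;;              # assumed meaning of the bare -f flag
        esac
    done
    shift $((OPTIND - 1))             # equivalent to the 'shift 2' seen in the trace
    device=$1                         # base device, 0000:00:11.0 here
    timeout=240
    mount_dir=$(mktemp -d)            # scratch dir, cleaned up by restore_kill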
00:02:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74535 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74535 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74535 ']' 00:18:10.107 00:02:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.107 00:02:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:10.107 [2024-11-19 00:02:16.688467] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:10.107 [2024-11-19 00:02:16.688618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74535 ] 00:18:10.369 [2024-11-19 00:02:16.853438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.369 [2024-11-19 00:02:16.974230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:11.353 00:02:17 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:11.353 00:02:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:11.638 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:11.638 { 00:18:11.638 "name": "nvme0n1", 00:18:11.638 "aliases": [ 00:18:11.638 "d7b88281-29fb-48e4-9a1e-39f0c8628859" 00:18:11.638 ], 00:18:11.638 "product_name": "NVMe disk", 00:18:11.638 "block_size": 4096, 00:18:11.638 "num_blocks": 1310720, 00:18:11.638 "uuid": 
"d7b88281-29fb-48e4-9a1e-39f0c8628859", 00:18:11.638 "numa_id": -1, 00:18:11.638 "assigned_rate_limits": { 00:18:11.638 "rw_ios_per_sec": 0, 00:18:11.638 "rw_mbytes_per_sec": 0, 00:18:11.638 "r_mbytes_per_sec": 0, 00:18:11.638 "w_mbytes_per_sec": 0 00:18:11.638 }, 00:18:11.638 "claimed": true, 00:18:11.638 "claim_type": "read_many_write_one", 00:18:11.638 "zoned": false, 00:18:11.638 "supported_io_types": { 00:18:11.638 "read": true, 00:18:11.638 "write": true, 00:18:11.638 "unmap": true, 00:18:11.638 "flush": true, 00:18:11.638 "reset": true, 00:18:11.638 "nvme_admin": true, 00:18:11.638 "nvme_io": true, 00:18:11.638 "nvme_io_md": false, 00:18:11.638 "write_zeroes": true, 00:18:11.638 "zcopy": false, 00:18:11.638 "get_zone_info": false, 00:18:11.638 "zone_management": false, 00:18:11.638 "zone_append": false, 00:18:11.638 "compare": true, 00:18:11.638 "compare_and_write": false, 00:18:11.638 "abort": true, 00:18:11.638 "seek_hole": false, 00:18:11.638 "seek_data": false, 00:18:11.638 "copy": true, 00:18:11.638 "nvme_iov_md": false 00:18:11.638 }, 00:18:11.638 "driver_specific": { 00:18:11.638 "nvme": [ 00:18:11.638 { 00:18:11.638 "pci_address": "0000:00:11.0", 00:18:11.638 "trid": { 00:18:11.638 "trtype": "PCIe", 00:18:11.638 "traddr": "0000:00:11.0" 00:18:11.638 }, 00:18:11.638 "ctrlr_data": { 00:18:11.638 "cntlid": 0, 00:18:11.638 "vendor_id": "0x1b36", 00:18:11.638 "model_number": "QEMU NVMe Ctrl", 00:18:11.638 "serial_number": "12341", 00:18:11.638 "firmware_revision": "8.0.0", 00:18:11.638 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:11.638 "oacs": { 00:18:11.638 "security": 0, 00:18:11.638 "format": 1, 00:18:11.638 "firmware": 0, 00:18:11.638 "ns_manage": 1 00:18:11.638 }, 00:18:11.638 "multi_ctrlr": false, 00:18:11.638 "ana_reporting": false 00:18:11.638 }, 00:18:11.638 "vs": { 00:18:11.638 "nvme_version": "1.4" 00:18:11.638 }, 00:18:11.638 "ns_data": { 00:18:11.638 "id": 1, 00:18:11.638 "can_share": false 00:18:11.638 } 00:18:11.638 } 00:18:11.638 ], 00:18:11.638 "mp_policy": "active_passive" 00:18:11.638 } 00:18:11.638 } 00:18:11.638 ]' 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:11.639 00:02:18 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:11.639 00:02:18 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:11.639 00:02:18 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:11.639 00:02:18 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:11.639 00:02:18 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.639 00:02:18 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:11.900 00:02:18 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=82b726b8-329a-48ba-8999-48e3353aa757 00:18:11.900 00:02:18 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:11.900 00:02:18 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82b726b8-329a-48ba-8999-48e3353aa757 00:18:12.161 00:02:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:12.421 00:02:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0ae7e3db-c8a3-4cf0-9470-faa269303142 00:18:12.421 00:02:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ae7e3db-c8a3-4cf0-9470-faa269303142 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:12.681 00:02:19 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:12.681 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:12.681 { 00:18:12.681 "name": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:12.681 "aliases": [ 00:18:12.681 "lvs/nvme0n1p0" 00:18:12.681 ], 00:18:12.681 "product_name": "Logical Volume", 00:18:12.681 "block_size": 4096, 00:18:12.681 "num_blocks": 26476544, 00:18:12.681 "uuid": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:12.681 "assigned_rate_limits": { 00:18:12.681 "rw_ios_per_sec": 0, 00:18:12.681 "rw_mbytes_per_sec": 0, 00:18:12.681 "r_mbytes_per_sec": 0, 00:18:12.681 "w_mbytes_per_sec": 0 00:18:12.681 }, 00:18:12.681 "claimed": false, 00:18:12.681 "zoned": false, 00:18:12.681 "supported_io_types": { 00:18:12.681 "read": true, 00:18:12.681 "write": true, 00:18:12.681 "unmap": true, 00:18:12.681 "flush": false, 00:18:12.681 "reset": true, 00:18:12.681 "nvme_admin": false, 00:18:12.681 "nvme_io": false, 00:18:12.681 "nvme_io_md": false, 00:18:12.681 "write_zeroes": true, 00:18:12.681 "zcopy": false, 00:18:12.681 "get_zone_info": false, 00:18:12.681 "zone_management": false, 00:18:12.681 "zone_append": false, 00:18:12.681 "compare": false, 00:18:12.681 "compare_and_write": false, 00:18:12.681 "abort": false, 00:18:12.681 "seek_hole": true, 00:18:12.681 "seek_data": true, 00:18:12.681 "copy": false, 00:18:12.681 "nvme_iov_md": false 00:18:12.681 }, 00:18:12.681 "driver_specific": { 00:18:12.681 "lvol": { 00:18:12.682 "lvol_store_uuid": "0ae7e3db-c8a3-4cf0-9470-faa269303142", 00:18:12.682 "base_bdev": "nvme0n1", 00:18:12.682 "thin_provision": true, 00:18:12.682 "num_allocated_clusters": 0, 00:18:12.682 "snapshot": false, 00:18:12.682 "clone": false, 00:18:12.682 "esnap_clone": false 00:18:12.682 } 00:18:12.682 } 00:18:12.682 } 00:18:12.682 ]' 00:18:12.682 00:02:19 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:12.941 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:12.941 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:12.941 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:12.941 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:12.941 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:12.941 00:02:19 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:12.941 00:02:19 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:12.941 00:02:19 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:13.202 00:02:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:13.202 00:02:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:13.202 00:02:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.202 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.202 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.202 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:13.202 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:13.202 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.464 { 00:18:13.464 "name": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:13.464 "aliases": [ 00:18:13.464 "lvs/nvme0n1p0" 00:18:13.464 ], 00:18:13.464 "product_name": "Logical Volume", 00:18:13.464 "block_size": 4096, 00:18:13.464 "num_blocks": 26476544, 00:18:13.464 "uuid": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:13.464 "assigned_rate_limits": { 00:18:13.464 "rw_ios_per_sec": 0, 00:18:13.464 "rw_mbytes_per_sec": 0, 00:18:13.464 "r_mbytes_per_sec": 0, 00:18:13.464 "w_mbytes_per_sec": 0 00:18:13.464 }, 00:18:13.464 "claimed": false, 00:18:13.464 "zoned": false, 00:18:13.464 "supported_io_types": { 00:18:13.464 "read": true, 00:18:13.464 "write": true, 00:18:13.464 "unmap": true, 00:18:13.464 "flush": false, 00:18:13.464 "reset": true, 00:18:13.464 "nvme_admin": false, 00:18:13.464 "nvme_io": false, 00:18:13.464 "nvme_io_md": false, 00:18:13.464 "write_zeroes": true, 00:18:13.464 "zcopy": false, 00:18:13.464 "get_zone_info": false, 00:18:13.464 "zone_management": false, 00:18:13.464 "zone_append": false, 00:18:13.464 "compare": false, 00:18:13.464 "compare_and_write": false, 00:18:13.464 "abort": false, 00:18:13.464 "seek_hole": true, 00:18:13.464 "seek_data": true, 00:18:13.464 "copy": false, 00:18:13.464 "nvme_iov_md": false 00:18:13.464 }, 00:18:13.464 "driver_specific": { 00:18:13.464 "lvol": { 00:18:13.464 "lvol_store_uuid": "0ae7e3db-c8a3-4cf0-9470-faa269303142", 00:18:13.464 "base_bdev": "nvme0n1", 00:18:13.464 "thin_provision": true, 00:18:13.464 "num_allocated_clusters": 0, 00:18:13.464 "snapshot": false, 00:18:13.464 "clone": false, 00:18:13.464 "esnap_clone": false 00:18:13.464 } 00:18:13.464 } 00:18:13.464 } 00:18:13.464 ]' 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
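[editor note] get_bdev_size, traced twice above, derives a size in MiB from the bdev_get_bdevs JSON: 26,476,544 blocks x 4,096 bytes = 108,447,924,224 bytes = 103,424 MiB for the lvol, and 1,310,720 x 4,096 = 5,120 MiB for nvme0n1 earlier, the value the '[[ 103424 -le 5120 ]]' check compared against the requested size. A sketch of the same computation, assuming the rpc.py path from this run's tree:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {                               # prints size of bdev $1 in MiB
        local info bs nb
        info=$($rpc bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$info")      # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$info")      # 26476544 for the lvol
        echo $((bs * nb / 1024 / 1024))             # 103424 MiB
    }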
00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:13.464 00:02:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:13.464 00:02:19 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:13.464 00:02:19 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:13.725 00:02:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:13.726 00:02:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6163454b-71e3-47cd-b2cf-12b18db97633 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.726 { 00:18:13.726 "name": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:13.726 "aliases": [ 00:18:13.726 "lvs/nvme0n1p0" 00:18:13.726 ], 00:18:13.726 "product_name": "Logical Volume", 00:18:13.726 "block_size": 4096, 00:18:13.726 "num_blocks": 26476544, 00:18:13.726 "uuid": "6163454b-71e3-47cd-b2cf-12b18db97633", 00:18:13.726 "assigned_rate_limits": { 00:18:13.726 "rw_ios_per_sec": 0, 00:18:13.726 "rw_mbytes_per_sec": 0, 00:18:13.726 "r_mbytes_per_sec": 0, 00:18:13.726 "w_mbytes_per_sec": 0 00:18:13.726 }, 00:18:13.726 "claimed": false, 00:18:13.726 "zoned": false, 00:18:13.726 "supported_io_types": { 00:18:13.726 "read": true, 00:18:13.726 "write": true, 00:18:13.726 "unmap": true, 00:18:13.726 "flush": false, 00:18:13.726 "reset": true, 00:18:13.726 "nvme_admin": false, 00:18:13.726 "nvme_io": false, 00:18:13.726 "nvme_io_md": false, 00:18:13.726 "write_zeroes": true, 00:18:13.726 "zcopy": false, 00:18:13.726 "get_zone_info": false, 00:18:13.726 "zone_management": false, 00:18:13.726 "zone_append": false, 00:18:13.726 "compare": false, 00:18:13.726 "compare_and_write": false, 00:18:13.726 "abort": false, 00:18:13.726 "seek_hole": true, 00:18:13.726 "seek_data": true, 00:18:13.726 "copy": false, 00:18:13.726 "nvme_iov_md": false 00:18:13.726 }, 00:18:13.726 "driver_specific": { 00:18:13.726 "lvol": { 00:18:13.726 "lvol_store_uuid": "0ae7e3db-c8a3-4cf0-9470-faa269303142", 00:18:13.726 "base_bdev": "nvme0n1", 00:18:13.726 "thin_provision": true, 00:18:13.726 "num_allocated_clusters": 0, 00:18:13.726 "snapshot": false, 00:18:13.726 "clone": false, 00:18:13.726 "esnap_clone": false 00:18:13.726 } 00:18:13.726 } 00:18:13.726 } 00:18:13.726 ]' 00:18:13.726 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:13.987 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.987 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.987 00:02:20 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:13.987 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:13.987 00:02:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6163454b-71e3-47cd-b2cf-12b18db97633 --l2p_dram_limit 10' 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:13.987 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:13.987 00:02:20 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6163454b-71e3-47cd-b2cf-12b18db97633 --l2p_dram_limit 10 -c nvc0n1p0 00:18:13.987 [2024-11-19 00:02:20.662481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.662520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.987 [2024-11-19 00:02:20.662533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:13.987 [2024-11-19 00:02:20.662540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.662588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.662596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.987 [2024-11-19 00:02:20.662604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:13.987 [2024-11-19 00:02:20.662610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.662628] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.987 [2024-11-19 00:02:20.663197] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.987 [2024-11-19 00:02:20.663220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.663226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.987 [2024-11-19 00:02:20.663234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:13.987 [2024-11-19 00:02:20.663240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.663295] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21cac7f8-48cf-499b-b792-6ff17c544639 00:18:13.987 [2024-11-19 00:02:20.664265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.664294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:13.987 [2024-11-19 00:02:20.664302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:13.987 [2024-11-19 00:02:20.664311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.669022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 
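The "[: : integer expression expected" complaint above comes from restore.sh line 54 evaluating '[' '' -eq 1 ']': an empty string is not a valid operand for -eq, so the test fails noisily but harmlessly and the script simply falls through. The flag being tested is not visible in the trace, so the variable below is hypothetical; this is a sketch of a defensive variant that substitutes 0 for an empty or unset value so the numeric test always compares two integers:

    use_fast_path=''                        # empty, as in the trace above
    if [ "${use_fast_path:-0}" -eq 1 ]; then
        echo 'fast path'
    fi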
00:02:20.669051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.987 [2024-11-19 00:02:20.669060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:18:13.987 [2024-11-19 00:02:20.669067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.669141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.669150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.987 [2024-11-19 00:02:20.669156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:13.987 [2024-11-19 00:02:20.669166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.669193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.669202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.987 [2024-11-19 00:02:20.669209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:13.987 [2024-11-19 00:02:20.669218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.669235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.987 [2024-11-19 00:02:20.672175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.672200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.987 [2024-11-19 00:02:20.672209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:18:13.987 [2024-11-19 00:02:20.672215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.672241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.987 [2024-11-19 00:02:20.672247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.987 [2024-11-19 00:02:20.672254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:13.987 [2024-11-19 00:02:20.672260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.987 [2024-11-19 00:02:20.672284] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:13.987 [2024-11-19 00:02:20.672388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:13.987 [2024-11-19 00:02:20.672405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.987 [2024-11-19 00:02:20.672415] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:13.987 [2024-11-19 00:02:20.672425] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.987 [2024-11-19 00:02:20.672432] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.987 [2024-11-19 00:02:20.672439] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:13.988 [2024-11-19 00:02:20.672444] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.988 [2024-11-19 00:02:20.672454] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:13.988 [2024-11-19 00:02:20.672460] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:13.988 [2024-11-19 00:02:20.672467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.988 [2024-11-19 00:02:20.672474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.988 [2024-11-19 00:02:20.672481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:18:13.988 [2024-11-19 00:02:20.672491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.988 [2024-11-19 00:02:20.672557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.988 [2024-11-19 00:02:20.672564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.988 [2024-11-19 00:02:20.672571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:13.988 [2024-11-19 00:02:20.672576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.988 [2024-11-19 00:02:20.672655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.988 [2024-11-19 00:02:20.672664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.988 [2024-11-19 00:02:20.672671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.988 [2024-11-19 00:02:20.672689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.988 [2024-11-19 00:02:20.672709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.988 [2024-11-19 00:02:20.672721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.988 [2024-11-19 00:02:20.672726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:13.988 [2024-11-19 00:02:20.672732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.988 [2024-11-19 00:02:20.672738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.988 [2024-11-19 00:02:20.672744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:13.988 [2024-11-19 00:02:20.672750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.988 [2024-11-19 00:02:20.672764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.988 [2024-11-19 00:02:20.672782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.988 
[2024-11-19 00:02:20.672798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.988 [2024-11-19 00:02:20.672818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.988 [2024-11-19 00:02:20.672834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.988 [2024-11-19 00:02:20.672853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.988 [2024-11-19 00:02:20.672865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.988 [2024-11-19 00:02:20.672870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:13.988 [2024-11-19 00:02:20.672876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.988 [2024-11-19 00:02:20.672881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:13.988 [2024-11-19 00:02:20.672887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:13.988 [2024-11-19 00:02:20.672892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:13.988 [2024-11-19 00:02:20.672903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:13.988 [2024-11-19 00:02:20.672909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672915] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.988 [2024-11-19 00:02:20.672922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.988 [2024-11-19 00:02:20.672928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.988 [2024-11-19 00:02:20.672941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:13.988 [2024-11-19 00:02:20.672949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.988 [2024-11-19 00:02:20.672954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.988 [2024-11-19 00:02:20.672960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.988 [2024-11-19 00:02:20.672965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.988 [2024-11-19 00:02:20.672971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.988 [2024-11-19 00:02:20.672979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.988 [2024-11-19 
00:02:20.672987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.672995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:13.988 [2024-11-19 00:02:20.673002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:13.988 [2024-11-19 00:02:20.673007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:13.988 [2024-11-19 00:02:20.673014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:13.988 [2024-11-19 00:02:20.673019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:13.988 [2024-11-19 00:02:20.673026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:13.988 [2024-11-19 00:02:20.673032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:13.988 [2024-11-19 00:02:20.673038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:13.988 [2024-11-19 00:02:20.673044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:13.988 [2024-11-19 00:02:20.673051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:13.988 [2024-11-19 00:02:20.673082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.988 [2024-11-19 00:02:20.673090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.988 [2024-11-19 00:02:20.673102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.988 [2024-11-19 00:02:20.673108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.988 [2024-11-19 00:02:20.673114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.988 [2024-11-19 00:02:20.673120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.988 [2024-11-19 00:02:20.673136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.988 [2024-11-19 00:02:20.673143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:18:13.988 [2024-11-19 00:02:20.673150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.988 [2024-11-19 00:02:20.673189] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:13.988 [2024-11-19 00:02:20.673201] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:18.194 [2024-11-19 00:02:24.593329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.194 [2024-11-19 00:02:24.593422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:18.194 [2024-11-19 00:02:24.593440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3920.121 ms 00:18:18.194 [2024-11-19 00:02:24.593452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.194 [2024-11-19 00:02:24.626411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.194 [2024-11-19 00:02:24.626488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.194 [2024-11-19 00:02:24.626503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.703 ms 00:18:18.194 [2024-11-19 00:02:24.626514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.194 [2024-11-19 00:02:24.626660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.194 [2024-11-19 00:02:24.626676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.194 [2024-11-19 00:02:24.626686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:18.195 [2024-11-19 00:02:24.626701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.662548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.662608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.195 [2024-11-19 00:02:24.662621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.792 ms 00:18:18.195 [2024-11-19 00:02:24.662632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.662671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.662686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.195 [2024-11-19 00:02:24.662695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:18.195 [2024-11-19 00:02:24.662705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.663358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.663402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.195 [2024-11-19 00:02:24.663414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:18:18.195 [2024-11-19 00:02:24.663426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 
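The superblock layout dump above expresses every region as a hex block offset and block size, where one block is 4096 bytes (the bdev block_size), so MiB = blocks * 4 / 1024. Two spot checks in shell against the human-readable layout dump:

    echo $(( 0x5000 * 4 / 1024 ))      # l2p: 20480 blocks -> 80 MiB ("Region l2p ... 80.00 MiB")
    echo $(( 0x1900000 * 4 / 1024 ))   # type:0x9: 26214400 blocks -> 102400 MiB ("Region data_btm ... 102400.00 MiB")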
[2024-11-19 00:02:24.663562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.663576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.195 [2024-11-19 00:02:24.663588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:18.195 [2024-11-19 00:02:24.663601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.681764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.681822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.195 [2024-11-19 00:02:24.681834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:18:18.195 [2024-11-19 00:02:24.681845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.695374] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:18.195 [2024-11-19 00:02:24.699351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.699400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.195 [2024-11-19 00:02:24.699414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.412 ms 00:18:18.195 [2024-11-19 00:02:24.699424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.813788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.813860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:18.195 [2024-11-19 00:02:24.813882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.311 ms 00:18:18.195 [2024-11-19 00:02:24.813891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.814107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.814140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.195 [2024-11-19 00:02:24.814157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:18.195 [2024-11-19 00:02:24.814166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.840525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.840584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:18.195 [2024-11-19 00:02:24.840600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.292 ms 00:18:18.195 [2024-11-19 00:02:24.840609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.866504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.866556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:18.195 [2024-11-19 00:02:24.866572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.828 ms 00:18:18.195 [2024-11-19 00:02:24.866580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.195 [2024-11-19 00:02:24.867221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.195 [2024-11-19 00:02:24.867255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.195 
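The "l2p maximum resident size is: 9 (of 10) MiB" notice above follows from the --l2p_dram_limit 10 passed to bdev_ftl_create: the full L2P is 20971520 entries at 4 bytes each (per the layout setup lines), i.e. 80 MiB, and only about 10 MiB of it may stay resident in DRAM at a time. The full-table arithmetic:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB), matching "Region l2p ... 80.00 MiB"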
[2024-11-19 00:02:24.867269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:18:18.195 [2024-11-19 00:02:24.867278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:24.955705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:24.955761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:18.456 [2024-11-19 00:02:24.955781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.357 ms 00:18:18.456 [2024-11-19 00:02:24.955790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:24.984773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:24.984828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:18.456 [2024-11-19 00:02:24.984845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.876 ms 00:18:18.456 [2024-11-19 00:02:24.984853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:25.011478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:25.011529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:18.456 [2024-11-19 00:02:25.011544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.564 ms 00:18:18.456 [2024-11-19 00:02:25.011552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:25.038680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:25.038734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.456 [2024-11-19 00:02:25.038749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.072 ms 00:18:18.456 [2024-11-19 00:02:25.038756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:25.038817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:25.038827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.456 [2024-11-19 00:02:25.038844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:18.456 [2024-11-19 00:02:25.038852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:25.038982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.456 [2024-11-19 00:02:25.038997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.456 [2024-11-19 00:02:25.039011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:18.456 [2024-11-19 00:02:25.039019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.456 [2024-11-19 00:02:25.040294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4377.254 ms, result 0 00:18:18.456 { 00:18:18.456 "name": "ftl0", 00:18:18.456 "uuid": "21cac7f8-48cf-499b-b792-6ff17c544639" 00:18:18.456 } 00:18:18.456 00:02:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:18.456 00:02:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:18.718 00:02:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:18.718 00:02:25 ftl.ftl_restore -- 
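With startup finished, restore.sh@61-63 wraps the live bdev configuration in a subsystems envelope so it can be replayed later. A sketch of that capture pattern as it appears in the trace; whether the script redirects exactly like this is an assumption, and the target path is inferred from the --json argument of the spdk_dd invocation further down:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json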
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:18.981 [2024-11-19 00:02:25.491545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.491616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.981 [2024-11-19 00:02:25.491633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.981 [2024-11-19 00:02:25.491650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.491676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.981 [2024-11-19 00:02:25.494711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.494756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.981 [2024-11-19 00:02:25.494771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:18:18.981 [2024-11-19 00:02:25.494779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.495056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.495069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.981 [2024-11-19 00:02:25.495086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:18.981 [2024-11-19 00:02:25.495095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.498381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.498420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.981 [2024-11-19 00:02:25.498432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:18:18.981 [2024-11-19 00:02:25.498440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.504674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.504719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:18.981 [2024-11-19 00:02:25.504737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:18:18.981 [2024-11-19 00:02:25.504746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.532106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.532165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.981 [2024-11-19 00:02:25.532181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.267 ms 00:18:18.981 [2024-11-19 00:02:25.532190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.550660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.550718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.981 [2024-11-19 00:02:25.550734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.403 ms 00:18:18.981 [2024-11-19 00:02:25.550743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.550926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.550941] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.981 [2024-11-19 00:02:25.550953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:18.981 [2024-11-19 00:02:25.550961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.578349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.578403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:18.981 [2024-11-19 00:02:25.578418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.362 ms 00:18:18.981 [2024-11-19 00:02:25.578426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.604518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.604581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:18.981 [2024-11-19 00:02:25.604596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.032 ms 00:18:18.981 [2024-11-19 00:02:25.604604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.630467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.630517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.981 [2024-11-19 00:02:25.630531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:18:18.981 [2024-11-19 00:02:25.630539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.656490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-19 00:02:25.656542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.981 [2024-11-19 00:02:25.656557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.829 ms 00:18:18.981 [2024-11-19 00:02:25.656564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-19 00:02:25.656617] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.981 [2024-11-19 00:02:25.656634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656724] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-19 00:02:25.656761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 
[2024-11-19 00:02:25.656949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.656995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:18.982 [2024-11-19 00:02:25.657205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.982 [2024-11-19 00:02:25.657861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.982 [2024-11-19 00:02:25.657875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cac7f8-48cf-499b-b792-6ff17c544639 00:18:18.982 [2024-11-19 00:02:25.657883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.983 [2024-11-19 00:02:25.657894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.983 [2024-11-19 00:02:25.657901] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.983 [2024-11-19 00:02:25.657915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.983 [2024-11-19 00:02:25.657922] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.983 [2024-11-19 00:02:25.657932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.983 [2024-11-19 00:02:25.657942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.983 [2024-11-19 00:02:25.657950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.983 [2024-11-19 00:02:25.657957] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:18.983 [2024-11-19 00:02:25.657966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.983 [2024-11-19 00:02:25.657974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.983 [2024-11-19 00:02:25.657985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:18:18.983 [2024-11-19 00:02:25.657992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.671845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.244 [2024-11-19 00:02:25.671894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:19.244 [2024-11-19 00:02:25.671908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.784 ms 00:18:19.244 [2024-11-19 00:02:25.671916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.672361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.244 [2024-11-19 00:02:25.672385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:19.244 [2024-11-19 00:02:25.672397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:19.244 [2024-11-19 00:02:25.672407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.719388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.719440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.244 [2024-11-19 00:02:25.719466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.719476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.719544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.719554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.244 [2024-11-19 00:02:25.719564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.719575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.719670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.719683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.244 [2024-11-19 00:02:25.719695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.719703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.719727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.719737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.244 [2024-11-19 00:02:25.719748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.719758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.804363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.804421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.244 [2024-11-19 00:02:25.804439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
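In the statistics dump above, WAF is the ratio of total writes to user writes; with 960 total (metadata-only) writes and 0 user writes the ratio divides by zero, which the dump renders as "inf". A one-liner reproducing that rendering:

    awk 'BEGIN { t = 960; u = 0; print (u ? t / u : "inf") }'   # -> inf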
00:18:19.244 [2024-11-19 00:02:25.804448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.873933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.873993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.244 [2024-11-19 00:02:25.874008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.244 [2024-11-19 00:02:25.874181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.244 [2024-11-19 00:02:25.874268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.244 [2024-11-19 00:02:25.874411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:19.244 [2024-11-19 00:02:25.874480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.244 [2024-11-19 00:02:25.874556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.244 [2024-11-19 00:02:25.874627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.244 [2024-11-19 00:02:25.874638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.244 [2024-11-19 00:02:25.874646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.244 [2024-11-19 00:02:25.874804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.211 ms, result 0 00:18:19.244 true 00:18:19.244 00:02:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74535 
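restore.sh@66 now tears down the SPDK app via the killprocess helper, whose body is traced in the next chunk (pid check, kill -0 liveness probe, uname/ps to get the process name, then kill and wait). A distilled sketch of that pattern, not the verbatim helper; the real one also special-cases sudo-wrapped and non-Linux processes, as the trace shows:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1               # the '[' -z 74535 ']' check in the trace
        kill -0 "$pid" 2>/dev/null || return 1  # is the process still alive?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # only valid for children of this shell
    }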
00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74535 ']' 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74535 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74535 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.244 killing process with pid 74535 00:18:19.244 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.245 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74535' 00:18:19.245 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74535 00:18:19.245 00:02:25 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74535 00:18:25.835 00:02:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:30.045 262144+0 records in 00:18:30.045 262144+0 records out 00:18:30.045 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.42535 s, 243 MB/s 00:18:30.045 00:02:36 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:31.960 00:02:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.960 [2024-11-19 00:02:38.365768] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:31.960 [2024-11-19 00:02:38.365885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74779 ] 00:18:31.960 [2024-11-19 00:02:38.523855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.960 [2024-11-19 00:02:38.641759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.534 [2024-11-19 00:02:38.928654] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:32.534 [2024-11-19 00:02:38.928738] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:32.534 [2024-11-19 00:02:39.089163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.089224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:32.534 [2024-11-19 00:02:39.089245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:32.534 [2024-11-19 00:02:39.089254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.089309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.089320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.534 [2024-11-19 00:02:39.089332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:32.534 [2024-11-19 00:02:39.089340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.089360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
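The dd numbers above are internally consistent: bs=4K with count=256K is 262144 records of 4096 bytes, i.e. 2^30 bytes, and dividing by the elapsed time reproduces the reported decimal-MB throughput:

    echo $(( 4096 * 262144 ))   # -> 1073741824 bytes (1.0 GiB)
    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.42535 / 1e6 }'   # -> 243 MB/s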
Using nvc0n1p0 as write buffer cache 00:18:32.534 [2024-11-19 00:02:39.090410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:32.534 [2024-11-19 00:02:39.090469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.090480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.534 [2024-11-19 00:02:39.090491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:18:32.534 [2024-11-19 00:02:39.090500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.092524] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:32.534 [2024-11-19 00:02:39.106854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.106910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:32.534 [2024-11-19 00:02:39.106925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:18:32.534 [2024-11-19 00:02:39.106934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.107017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.107028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:32.534 [2024-11-19 00:02:39.107038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:32.534 [2024-11-19 00:02:39.107045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.115348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.115392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.534 [2024-11-19 00:02:39.115403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.207 ms 00:18:32.534 [2024-11-19 00:02:39.115412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.115527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.115537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.534 [2024-11-19 00:02:39.115546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:32.534 [2024-11-19 00:02:39.115555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.115607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.115619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:32.534 [2024-11-19 00:02:39.115628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:32.534 [2024-11-19 00:02:39.115636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.115662] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:32.534 [2024-11-19 00:02:39.119682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.119726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.534 [2024-11-19 00:02:39.119737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.027 ms 00:18:32.534 [2024-11-19 00:02:39.119748] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.119783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.534 [2024-11-19 00:02:39.119792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:32.534 [2024-11-19 00:02:39.119801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:32.534 [2024-11-19 00:02:39.119808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.534 [2024-11-19 00:02:39.119859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:32.534 [2024-11-19 00:02:39.119883] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:32.534 [2024-11-19 00:02:39.119919] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:32.534 [2024-11-19 00:02:39.119939] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:32.534 [2024-11-19 00:02:39.120045] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:32.534 [2024-11-19 00:02:39.120060] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:32.534 [2024-11-19 00:02:39.120071] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:32.534 [2024-11-19 00:02:39.120082] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:32.534 [2024-11-19 00:02:39.120091] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:32.535 [2024-11-19 00:02:39.120108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:32.535 [2024-11-19 00:02:39.120116] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:32.535 [2024-11-19 00:02:39.120140] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:32.535 [2024-11-19 00:02:39.120151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.535 [2024-11-19 00:02:39.120160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:32.535 [2024-11-19 00:02:39.120170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:32.535 [2024-11-19 00:02:39.120179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.535 [2024-11-19 00:02:39.120263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.535 [2024-11-19 00:02:39.120274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:32.535 [2024-11-19 00:02:39.120283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:32.535 [2024-11-19 00:02:39.120290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.535 [2024-11-19 00:02:39.120396] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:32.535 [2024-11-19 00:02:39.120419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:32.535 [2024-11-19 00:02:39.120429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:32.535 [2024-11-19 00:02:39.120437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:32.535 [2024-11-19 00:02:39.120454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:32.535 [2024-11-19 00:02:39.120481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:32.535 [2024-11-19 00:02:39.120495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:32.535 [2024-11-19 00:02:39.120504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:32.535 [2024-11-19 00:02:39.120513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:32.535 [2024-11-19 00:02:39.120520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:32.535 [2024-11-19 00:02:39.120528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:32.535 [2024-11-19 00:02:39.120543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:32.535 [2024-11-19 00:02:39.120558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:32.535 [2024-11-19 00:02:39.120579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:32.535 [2024-11-19 00:02:39.120602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:32.535 [2024-11-19 00:02:39.120623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:32.535 [2024-11-19 00:02:39.120644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:32.535 [2024-11-19 00:02:39.120664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:32.535 [2024-11-19 00:02:39.120680] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:32.535 [2024-11-19 00:02:39.120686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:32.535 [2024-11-19 00:02:39.120693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:32.535 [2024-11-19 00:02:39.120699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:32.535 [2024-11-19 00:02:39.120706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:32.535 [2024-11-19 00:02:39.120712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:32.535 [2024-11-19 00:02:39.120725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:32.535 [2024-11-19 00:02:39.120733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120742] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:32.535 [2024-11-19 00:02:39.120751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:32.535 [2024-11-19 00:02:39.120759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.535 [2024-11-19 00:02:39.120775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:32.535 [2024-11-19 00:02:39.120782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:32.535 [2024-11-19 00:02:39.120789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:32.535 [2024-11-19 00:02:39.120798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:32.535 [2024-11-19 00:02:39.120805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:32.535 [2024-11-19 00:02:39.120812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:32.535 [2024-11-19 00:02:39.120820] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:32.535 [2024-11-19 00:02:39.120831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:32.535 [2024-11-19 00:02:39.120850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:32.535 [2024-11-19 00:02:39.120857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:32.535 [2024-11-19 00:02:39.120866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:32.535 [2024-11-19 00:02:39.120874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:32.535 [2024-11-19 00:02:39.120881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:32.535 [2024-11-19 00:02:39.120888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:32.535 [2024-11-19 00:02:39.120896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:32.535 [2024-11-19 00:02:39.120903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:32.535 [2024-11-19 00:02:39.120910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:32.535 [2024-11-19 00:02:39.120948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:32.535 [2024-11-19 00:02:39.120959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:32.535 [2024-11-19 00:02:39.120976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:32.535 [2024-11-19 00:02:39.120984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:32.535 [2024-11-19 00:02:39.120991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:32.535 [2024-11-19 00:02:39.121001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.535 [2024-11-19 00:02:39.121010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:32.535 [2024-11-19 00:02:39.121018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:18:32.535 [2024-11-19 00:02:39.121026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.535 [2024-11-19 00:02:39.153175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.535 [2024-11-19 00:02:39.153225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.535 [2024-11-19 00:02:39.153237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.103 ms 00:18:32.535 [2024-11-19 00:02:39.153245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.535 [2024-11-19 00:02:39.153341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.535 [2024-11-19 00:02:39.153350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:32.535 [2024-11-19 00:02:39.153359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:18:32.535 [2024-11-19 00:02:39.153367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.536 [2024-11-19 00:02:39.199824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.536 [2024-11-19 00:02:39.199879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.536 [2024-11-19 00:02:39.199892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.398 ms 00:18:32.536 [2024-11-19 00:02:39.199901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.536 [2024-11-19 00:02:39.199950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.536 [2024-11-19 00:02:39.199960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.536 [2024-11-19 00:02:39.199970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:32.536 [2024-11-19 00:02:39.199981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.536 [2024-11-19 00:02:39.200600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.536 [2024-11-19 00:02:39.200637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.536 [2024-11-19 00:02:39.200649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:18:32.536 [2024-11-19 00:02:39.200657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.536 [2024-11-19 00:02:39.200815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.536 [2024-11-19 00:02:39.200828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.536 [2024-11-19 00:02:39.200837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:32.536 [2024-11-19 00:02:39.200851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.536 [2024-11-19 00:02:39.216478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.536 [2024-11-19 00:02:39.216522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.536 [2024-11-19 00:02:39.216536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.606 ms 00:18:32.536 [2024-11-19 00:02:39.216545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.230789] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:32.796 [2024-11-19 00:02:39.230843] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:32.796 [2024-11-19 00:02:39.230857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.230866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:32.796 [2024-11-19 00:02:39.230877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.201 ms 00:18:32.796 [2024-11-19 00:02:39.230884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.256537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.256587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:32.796 [2024-11-19 00:02:39.256607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.598 ms 00:18:32.796 [2024-11-19 00:02:39.256616] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.269488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.269545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:32.796 [2024-11-19 00:02:39.269557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.818 ms 00:18:32.796 [2024-11-19 00:02:39.269565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.282089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.283107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:32.796 [2024-11-19 00:02:39.283162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:18:32.796 [2024-11-19 00:02:39.283173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.283848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.283891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:32.796 [2024-11-19 00:02:39.283902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:18:32.796 [2024-11-19 00:02:39.283911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.349597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.349654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:32.796 [2024-11-19 00:02:39.349668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.662 ms 00:18:32.796 [2024-11-19 00:02:39.349684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.360825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:32.796 [2024-11-19 00:02:39.363802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.363847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:32.796 [2024-11-19 00:02:39.363860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:18:32.796 [2024-11-19 00:02:39.363870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.363953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.363965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:32.796 [2024-11-19 00:02:39.363975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:32.796 [2024-11-19 00:02:39.363984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.364069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.364082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:32.796 [2024-11-19 00:02:39.364091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:32.796 [2024-11-19 00:02:39.364099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.364138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.364148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:32.796 [2024-11-19 00:02:39.364157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:32.796 [2024-11-19 00:02:39.364167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.796 [2024-11-19 00:02:39.364203] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:32.796 [2024-11-19 00:02:39.364216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.796 [2024-11-19 00:02:39.364226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:32.797 [2024-11-19 00:02:39.364235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:32.797 [2024-11-19 00:02:39.364244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.797 [2024-11-19 00:02:39.389736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.797 [2024-11-19 00:02:39.389789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:32.797 [2024-11-19 00:02:39.389803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:18:32.797 [2024-11-19 00:02:39.389812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.797 [2024-11-19 00:02:39.389906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.797 [2024-11-19 00:02:39.389917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:32.797 [2024-11-19 00:02:39.389926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:32.797 [2024-11-19 00:02:39.389934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.797 [2024-11-19 00:02:39.391786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.128 ms, result 0 00:18:33.740  [2024-11-19T00:02:41.819Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-19T00:02:42.762Z] Copying: 33/1024 [MB] (15 MBps) [2024-11-19T00:02:43.707Z] Copying: 55/1024 [MB] (22 MBps) [2024-11-19T00:02:44.651Z] Copying: 80/1024 [MB] (24 MBps) [2024-11-19T00:02:45.596Z] Copying: 97/1024 [MB] (16 MBps) [2024-11-19T00:02:46.539Z] Copying: 112/1024 [MB] (15 MBps) [2024-11-19T00:02:47.483Z] Copying: 150/1024 [MB] (37 MBps) [2024-11-19T00:02:48.427Z] Copying: 194/1024 [MB] (44 MBps) [2024-11-19T00:02:49.824Z] Copying: 237/1024 [MB] (43 MBps) [2024-11-19T00:02:50.769Z] Copying: 281/1024 [MB] (43 MBps) [2024-11-19T00:02:51.712Z] Copying: 297/1024 [MB] (16 MBps) [2024-11-19T00:02:52.730Z] Copying: 308/1024 [MB] (10 MBps) [2024-11-19T00:02:53.683Z] Copying: 347/1024 [MB] (38 MBps) [2024-11-19T00:02:54.628Z] Copying: 372/1024 [MB] (25 MBps) [2024-11-19T00:02:55.570Z] Copying: 402/1024 [MB] (30 MBps) [2024-11-19T00:02:56.515Z] Copying: 441/1024 [MB] (39 MBps) [2024-11-19T00:02:57.458Z] Copying: 464/1024 [MB] (22 MBps) [2024-11-19T00:02:58.846Z] Copying: 477/1024 [MB] (12 MBps) [2024-11-19T00:02:59.417Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-19T00:03:00.798Z] Copying: 499/1024 [MB] (11 MBps) [2024-11-19T00:03:01.742Z] Copying: 523/1024 [MB] (24 MBps) [2024-11-19T00:03:02.685Z] Copying: 544/1024 [MB] (20 MBps) [2024-11-19T00:03:03.626Z] Copying: 562/1024 [MB] (18 MBps) [2024-11-19T00:03:04.570Z] Copying: 584/1024 [MB] (22 MBps) [2024-11-19T00:03:05.514Z] Copying: 602/1024 [MB] (18 MBps) [2024-11-19T00:03:06.458Z] Copying: 620/1024 [MB] (17 MBps) [2024-11-19T00:03:07.846Z] Copying: 631/1024 [MB] (10 MBps) 
[2024-11-19T00:03:08.419Z] Copying: 642/1024 [MB] (11 MBps) [2024-11-19T00:03:09.815Z] Copying: 660/1024 [MB] (17 MBps) [2024-11-19T00:03:10.758Z] Copying: 680/1024 [MB] (20 MBps) [2024-11-19T00:03:11.702Z] Copying: 694/1024 [MB] (13 MBps) [2024-11-19T00:03:12.646Z] Copying: 710/1024 [MB] (16 MBps) [2024-11-19T00:03:13.591Z] Copying: 721/1024 [MB] (10 MBps) [2024-11-19T00:03:14.536Z] Copying: 731/1024 [MB] (10 MBps) [2024-11-19T00:03:15.480Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-19T00:03:16.425Z] Copying: 780/1024 [MB] (38 MBps) [2024-11-19T00:03:17.811Z] Copying: 794/1024 [MB] (13 MBps) [2024-11-19T00:03:18.754Z] Copying: 812/1024 [MB] (17 MBps) [2024-11-19T00:03:19.418Z] Copying: 823/1024 [MB] (11 MBps) [2024-11-19T00:03:20.807Z] Copying: 839/1024 [MB] (15 MBps) [2024-11-19T00:03:21.751Z] Copying: 856/1024 [MB] (17 MBps) [2024-11-19T00:03:22.735Z] Copying: 869/1024 [MB] (12 MBps) [2024-11-19T00:03:23.677Z] Copying: 886/1024 [MB] (17 MBps) [2024-11-19T00:03:24.621Z] Copying: 906/1024 [MB] (19 MBps) [2024-11-19T00:03:25.565Z] Copying: 922/1024 [MB] (16 MBps) [2024-11-19T00:03:26.511Z] Copying: 939/1024 [MB] (17 MBps) [2024-11-19T00:03:27.456Z] Copying: 954/1024 [MB] (14 MBps) [2024-11-19T00:03:28.845Z] Copying: 977/1024 [MB] (22 MBps) [2024-11-19T00:03:29.420Z] Copying: 993/1024 [MB] (16 MBps) [2024-11-19T00:03:30.809Z] Copying: 1004/1024 [MB] (10 MBps) [2024-11-19T00:03:30.809Z] Copying: 1018/1024 [MB] (14 MBps) [2024-11-19T00:03:30.809Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-19 00:03:30.729477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.117 [2024-11-19 00:03:30.729510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.117 [2024-11-19 00:03:30.729520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:24.118 [2024-11-19 00:03:30.729527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.729543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.118 [2024-11-19 00:03:30.731665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.731691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.118 [2024-11-19 00:03:30.731700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:19:24.118 [2024-11-19 00:03:30.731706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.733058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.733085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.118 [2024-11-19 00:03:30.733093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:19:24.118 [2024-11-19 00:03:30.733099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.746650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.746677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.118 [2024-11-19 00:03:30.746685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.539 ms 00:19:24.118 [2024-11-19 00:03:30.746691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.751479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:24.118 [2024-11-19 00:03:30.751508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.118 [2024-11-19 00:03:30.751516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:19:24.118 [2024-11-19 00:03:30.751522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.769920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.769948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.118 [2024-11-19 00:03:30.769958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.362 ms 00:19:24.118 [2024-11-19 00:03:30.769964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.781320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.781347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.118 [2024-11-19 00:03:30.781355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.332 ms 00:19:24.118 [2024-11-19 00:03:30.781362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.781448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.781455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.118 [2024-11-19 00:03:30.781464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:24.118 [2024-11-19 00:03:30.781470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.118 [2024-11-19 00:03:30.799085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.118 [2024-11-19 00:03:30.799109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:24.118 [2024-11-19 00:03:30.799118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.604 ms 00:19:24.118 [2024-11-19 00:03:30.799135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.378 [2024-11-19 00:03:30.816712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.378 [2024-11-19 00:03:30.816738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:24.378 [2024-11-19 00:03:30.816753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:19:24.378 [2024-11-19 00:03:30.816758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.378 [2024-11-19 00:03:30.833782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.378 [2024-11-19 00:03:30.833807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.378 [2024-11-19 00:03:30.833815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.998 ms 00:19:24.378 [2024-11-19 00:03:30.833821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.378 [2024-11-19 00:03:30.850529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.378 [2024-11-19 00:03:30.850554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.378 [2024-11-19 00:03:30.850562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.668 ms 00:19:24.378 [2024-11-19 00:03:30.850567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.378 [2024-11-19 
00:03:30.850590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.378 [2024-11-19 00:03:30.850601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.378 [2024-11-19 00:03:30.850609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.378 [2024-11-19 00:03:30.850615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.378 [2024-11-19 00:03:30.850620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 
00:03:30.850738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:19:24.379 [2024-11-19 00:03:30.850878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.850998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.379 [2024-11-19 00:03:30.851035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.380 [2024-11-19 00:03:30.851185] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.380 [2024-11-19 00:03:30.851194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cac7f8-48cf-499b-b792-6ff17c544639 00:19:24.380 [2024-11-19 00:03:30.851200] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.380 [2024-11-19 00:03:30.851207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:24.380 [2024-11-19 00:03:30.851212] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.380 [2024-11-19 00:03:30.851218] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.380 [2024-11-19 00:03:30.851223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.380 [2024-11-19 00:03:30.851229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.380 [2024-11-19 00:03:30.851235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.380 [2024-11-19 00:03:30.851244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.380 [2024-11-19 00:03:30.851248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.380 [2024-11-19 00:03:30.851254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.380 [2024-11-19 00:03:30.851261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.380 [2024-11-19 00:03:30.851267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:19:24.380 [2024-11-19 00:03:30.851272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.860563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.380 [2024-11-19 00:03:30.860587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.380 [2024-11-19 00:03:30.860594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.279 ms 00:19:24.380 [2024-11-19 00:03:30.860600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.860864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.380 [2024-11-19 00:03:30.860877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.380 [2024-11-19 00:03:30.860884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:19:24.380 [2024-11-19 00:03:30.860890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.886252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.886278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.380 [2024-11-19 00:03:30.886286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.886292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.886331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.886338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.380 [2024-11-19 00:03:30.886344] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.886350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.886389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.886396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.380 [2024-11-19 00:03:30.886401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.886407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.886418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.886424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.380 [2024-11-19 00:03:30.886430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.886436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.945421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.945453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.380 [2024-11-19 00:03:30.945463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.945468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.993271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.993304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.380 [2024-11-19 00:03:30.993313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.993320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.993367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.993377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.380 [2024-11-19 00:03:30.993384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.993389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.993415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.993422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.380 [2024-11-19 00:03:30.993428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.993433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.993497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.993507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.380 [2024-11-19 00:03:30.993513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.380 [2024-11-19 00:03:30.993519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.380 [2024-11-19 00:03:30.993540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.380 [2024-11-19 00:03:30.993547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:19:24.380 [2024-11-19 00:03:30.993552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.381 [2024-11-19 00:03:30.993558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.381 [2024-11-19 00:03:30.993584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.381 [2024-11-19 00:03:30.993590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.381 [2024-11-19 00:03:30.993598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.381 [2024-11-19 00:03:30.993604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.381 [2024-11-19 00:03:30.993635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.381 [2024-11-19 00:03:30.993642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.381 [2024-11-19 00:03:30.993648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.381 [2024-11-19 00:03:30.993653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.381 [2024-11-19 00:03:30.993740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.240 ms, result 0 00:19:24.953 00:19:24.953 00:19:24.953 00:03:31 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:25.214 [2024-11-19 00:03:31.645723] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:25.214 [2024-11-19 00:03:31.645850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75330 ] 00:19:25.214 [2024-11-19 00:03:31.801644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.214 [2024-11-19 00:03:31.877749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.475 [2024-11-19 00:03:32.082452] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.475 [2024-11-19 00:03:32.082499] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.738 [2024-11-19 00:03:32.229836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.738 [2024-11-19 00:03:32.229881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:25.738 [2024-11-19 00:03:32.229898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:25.738 [2024-11-19 00:03:32.229906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.738 [2024-11-19 00:03:32.229951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.738 [2024-11-19 00:03:32.229961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.738 [2024-11-19 00:03:32.229971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:25.738 [2024-11-19 00:03:32.229979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.738 [2024-11-19 00:03:32.229995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:25.738 
[2024-11-19 00:03:32.230667] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-19 00:03:32.230691] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev' duration: 0.700 ms status: 0
[2024-11-19 00:03:32.231796] mngt/ftl_mngt_md.c:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-19 00:03:32.244248] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block' duration: 12.453 ms status: 0
[2024-11-19 00:03:32.244356] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block' duration: 0.016 ms status: 0
[2024-11-19 00:03:32.249416] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools' duration: 4.989 ms status: 0
[2024-11-19 00:03:32.249530] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands' duration: 0.049 ms status: 0
[2024-11-19 00:03:32.249603] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device' duration: 0.006 ms status: 0
[2024-11-19 00:03:32.249653] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-19 00:03:32.253021] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel' duration: 3.373 ms status: 0
[2024-11-19 00:03:32.253094] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands' duration: 0.010 ms status: 0
[2024-11-19 00:03:32.253144] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-19 00:03:32.253162] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-11-19 00:03:32.253316] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-11-19 00:03:32.253347] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-19 00:03:32.253356] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-19 00:03:32.253364] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-19 00:03:32.253372] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-19 00:03:32.253379] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-19 00:03:32.253386] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-19 00:03:32.253396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout' duration: 0.254 ms status: 0
[2024-11-19 00:03:32.253499] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout' duration: 0.068 ms status: 0
[2024-11-19 00:03:32.253632] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb               offset:   0.00 MiB   blocks:  0.12 MiB
    Region l2p              offset:   0.12 MiB   blocks: 80.00 MiB
    Region band_md          offset:  80.12 MiB   blocks:  0.50 MiB
    Region band_md_mirror   offset:  80.62 MiB   blocks:  0.50 MiB
    Region nvc_md           offset: 113.88 MiB   blocks:  0.12 MiB
    Region nvc_md_mirror    offset: 114.00 MiB   blocks:  0.12 MiB
    Region p2l0             offset:  81.12 MiB   blocks:  8.00 MiB
    Region p2l1             offset:  89.12 MiB   blocks:  8.00 MiB
    Region p2l2             offset:  97.12 MiB   blocks:  8.00 MiB
    Region p2l3             offset: 105.12 MiB   blocks:  8.00 MiB
    Region trim_md          offset: 113.12 MiB   blocks:  0.25 MiB
    Region trim_md_mirror   offset: 113.38 MiB   blocks:  0.25 MiB
    Region trim_log         offset: 113.62 MiB   blocks:  0.12 MiB
    Region trim_log_mirror  offset: 113.75 MiB   blocks:  0.12 MiB
[2024-11-19 00:03:32.253936] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror        offset:      0.00 MiB   blocks:      0.12 MiB
    Region vmap             offset: 102400.25 MiB   blocks:      3.38 MiB
    Region data_btm         offset:      0.25 MiB   blocks: 102400.00 MiB
[2024-11-19 00:03:32.254004] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5 blk_offs:0x0      blk_sz:0x20
    Region type:0x2        ver:0 blk_offs:0x20     blk_sz:0x5000
    Region type:0x3        ver:2 blk_offs:0x5020   blk_sz:0x80
    Region type:0x4        ver:2 blk_offs:0x50a0   blk_sz:0x80
    Region type:0xa        ver:2 blk_offs:0x5120   blk_sz:0x800
    Region type:0xb        ver:2 blk_offs:0x5920   blk_sz:0x800
    Region type:0xc        ver:2 blk_offs:0x6120   blk_sz:0x800
    Region type:0xd        ver:2 blk_offs:0x6920   blk_sz:0x800
    Region type:0xe        ver:0 blk_offs:0x7120   blk_sz:0x40
    Region type:0xf        ver:0 blk_offs:0x7160   blk_sz:0x40
    Region type:0x10       ver:1 blk_offs:0x71a0   blk_sz:0x20
    Region type:0x11       ver:1 blk_offs:0x71c0   blk_sz:0x20
    Region type:0x6        ver:2 blk_offs:0x71e0   blk_sz:0x20
    Region type:0x7        ver:2 blk_offs:0x7200   blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220   blk_sz:0x13c0e0
[2024-11-19 00:03:32.254135] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-19 00:03:32.254184] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade' duration: 0.620 ms status: 0
[2024-11-19 00:03:32.280241] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata' duration: 25.994 ms status: 0
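The hex blk_offs/blk_sz values in the SB metadata dump and the MiB figures in the layout dump above describe the same regions. A quick cross-check for region type 0x3 (band_md), assuming SPDK FTL's 4096-byte logical block size (an assumption; the log never states the block size):

    # band_md: blk_offs:0x5020, blk_sz:0x80 -> MiB, assuming 4096-byte blocks
    echo "offset: $(echo "$((0x5020)) * 4096 / 1048576" | bc -l) MiB"
    echo "size:   $(echo "$((0x80)) * 4096 / 1048576" | bc -l) MiB"
    # -> 80.125 MiB and 0.50 MiB, matching "Region band_md, offset: 80.12 MiB,
    #    blocks: 0.50 MiB" in the NV cache layout dump above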
[2024-11-19 00:03:32.280371] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses' duration: 0.059 ms status: 0
[2024-11-19 00:03:32.321901] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache' duration: 41.461 ms status: 0
[2024-11-19 00:03:32.322004] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map' duration: 0.003 ms status: 0
[2024-11-19 00:03:32.322444] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map' duration: 0.350 ms status: 0
[2024-11-19 00:03:32.322617] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata' duration: 0.107 ms status: 0
[2024-11-19 00:03:32.336053] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc' duration: 13.386 ms status: 0
[2024-11-19 00:03:32.349079] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-19 00:03:32.349117] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-19 00:03:32.349137] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata' duration: 12.924 ms status: 0
[2024-11-19 00:03:32.374137] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata' duration: 24.933 ms status: 0
[2024-11-19 00:03:32.386741] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata' duration: 12.480 ms status: 0
[2024-11-19 00:03:32.398959] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata' duration: 12.117 ms status: 0
[2024-11-19 00:03:32.399704] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing' duration: 0.559 ms status: 0
[2024-11-19 00:03:32.463088] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints' duration: 63.307 ms status: 0
[2024-11-19 00:03:32.474440] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-19 00:03:32.477263] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P' duration: 14.008 ms status: 0
[2024-11-19 00:03:32.477410] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P' duration: 0.015 ms status: 0
[2024-11-19 00:03:32.477515] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization' duration: 0.035 ms status: 0
[2024-11-19 00:03:32.477566] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller' duration: 0.005 ms status: 0
[2024-11-19 00:03:32.477625] mngt/ftl_mngt_self_test.c:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-19 00:03:32.477639] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup' duration: 0.015 ms status: 0
[2024-11-19 00:03:32.503007] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state' duration: 25.325 ms status: 0
[2024-11-19 00:03:32.503185] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization' duration: 0.041 ms status: 0
[2024-11-19 00:03:32.505077] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.754 ms, result 0
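Taken together, the per-step durations should land close to the total reported by finish_msg (274.754 ms for 'FTL startup'; the remainder is time spent between steps). A rough way to check that over a saved copy of this console output, restricted to the startup section (build.log is a stand-in file name, not something the job produces):

    # Sum every "duration: X ms" from the [FTL][ftl0] trace lines
    sed -n 's/.*\[FTL\]\[ftl0\].*duration: \([0-9.]*\) ms.*/\1/p' build.log |
      awk '{sum += $1} END {printf "sum of step durations: %.3f ms\n", sum}'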
[2024-11-19T00:03:35.026Z] Copying: 16/1024 [MB] (16 MBps)
    ... roughly 70 intermediate progress snapshots, about one per second, at 10-22 MBps ...
[2024-11-19T00:04:44.487Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-11-19 00:04:44.361289] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel' duration: 0.004 ms status: 0
[2024-11-19 00:04:44.361444] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
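The reported average is easy to sanity-check: startup finished at 00:03:32.505 and the first shutdown step logged at 00:04:44.361, so 1024 MB moved in roughly 71.9 s:

    # 1024 MB over ~71.9 s of copy time
    echo "scale=1; 1024 / 71.9" | bc
    # -> 14.2, consistent with the "average 14 MBps" line above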
[2024-11-19 00:04:44.365304] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device' duration: 3.839 ms status: 0
[2024-11-19 00:04:44.365640] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller' duration: 0.224 ms status: 0
[2024-11-19 00:04:44.369575] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P' duration: 3.887 ms status: 0
[2024-11-19 00:04:44.377544] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims' duration: 7.899 ms status: 0
[2024-11-19 00:04:44.405188] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata' duration: 27.504 ms status: 0
[2024-11-19 00:04:44.422050] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata' duration: 16.728 ms status: 0
[2024-11-19 00:04:44.422295] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata' duration: 0.093 ms status: 0
[2024-11-19 00:04:44.449681] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata' duration: 27.332 ms status: 0
[2024-11-19 00:04:44.475954] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata' duration: 26.151 ms status: 0
[2024-11-19 00:04:44.501850] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock' duration: 25.761 ms status: 0
[2024-11-19 00:04:44.527448] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state' duration: 25.436 ms status: 0
[2024-11-19 00:04:44.527579] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
    Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
[2024-11-19 00:04:44.528446] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
    device UUID:      21cac7f8-48cf-499b-b792-6ff17c544639
    total valid LBAs: 0
    total writes:     960
    user writes:      0
    WAF:              inf
    limits:           crit: 0  high: 0  low: 0  start: 0
[2024-11-19 00:04:44.528548] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics' duration: 0.970 ms status: 0
[2024-11-19 00:04:44.542403] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P' duration: 13.795 ms status: 0
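On the WAF line in the statistics dump above: the counters show 960 total writes and 0 user writes, and WAF is presumably the ratio of the two, so a zero user-write count comes out as infinity; on that reading, all 960 writes in this read-back run are FTL metadata. A guarded version of the same division:

    total_writes=960
    user_writes=0
    # Division by zero is reported as "inf", as in the dump above
    if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
    else
      echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
    fi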
[2024-11-19 00:04:44.542855] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing' duration: 0.365 ms status: 0
[2024-11-19 00:04:44.579771] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.579919] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.580043] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.580090] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.666782] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737381] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737527] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737614] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737743] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737802] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737872] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.737949] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev' duration: 0.000 ms status: 0
[2024-11-19 00:04:44.738112] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.793 ms, result 0

00:04:45 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:04:47 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-11-19 00:04:47.657147] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
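These two steps, run by hand, would look as below: restore.sh@76 verifies the read-back file against the checksum recorded earlier in the test, and restore.sh@79 writes the file back into ftl0 at an offset. Paths and flags are exactly those in the log; whether --seek counts output blocks (as with classic dd) is an assumption:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Verify the data read back from ftl0 against the stored checksum
    md5sum -c "$SPDK"/test/ftl/testfile.md5
    # Write the file back into the FTL bdev at an offset (--seek=131072,
    # presumably in output-block units, as with classic dd)
    "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/testfile \
      --ob=ftl0 \
      --json="$SPDK"/test/ftl/config/ftl.json \
      --seek=131072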
00:20:41.182 [2024-11-19 00:04:47.657276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76119 ] 00:20:41.182 [2024-11-19 00:04:47.818587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.486 [2024-11-19 00:04:47.930719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.770 [2024-11-19 00:04:48.217918] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.770 [2024-11-19 00:04:48.218006] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.770 [2024-11-19 00:04:48.378064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.378132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:41.770 [2024-11-19 00:04:48.378152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:41.770 [2024-11-19 00:04:48.378160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.378208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.378219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.770 [2024-11-19 00:04:48.378231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:41.770 [2024-11-19 00:04:48.378239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.378259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:41.770 [2024-11-19 00:04:48.378930] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:41.770 [2024-11-19 00:04:48.378958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.378966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.770 [2024-11-19 00:04:48.378975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:20:41.770 [2024-11-19 00:04:48.378983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.380513] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:41.770 [2024-11-19 00:04:48.393603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.393645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:41.770 [2024-11-19 00:04:48.393658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.091 ms 00:20:41.770 [2024-11-19 00:04:48.393666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.393729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.393738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:41.770 [2024-11-19 00:04:48.393748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:41.770 [2024-11-19 00:04:48.393756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.399875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:41.770 [2024-11-19 00:04:48.399911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.770 [2024-11-19 00:04:48.399921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.066 ms 00:20:41.770 [2024-11-19 00:04:48.399929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-19 00:04:48.400007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-19 00:04:48.400016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.770 [2024-11-19 00:04:48.400024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:41.770 [2024-11-19 00:04:48.400032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.400083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.771 [2024-11-19 00:04:48.400093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:41.771 [2024-11-19 00:04:48.400103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:41.771 [2024-11-19 00:04:48.400111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.400149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.771 [2024-11-19 00:04:48.403575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.771 [2024-11-19 00:04:48.403608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.771 [2024-11-19 00:04:48.403618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:20:41.771 [2024-11-19 00:04:48.403629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.403660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.771 [2024-11-19 00:04:48.403669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:41.771 [2024-11-19 00:04:48.403677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:41.771 [2024-11-19 00:04:48.403685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.403705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:41.771 [2024-11-19 00:04:48.403722] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:41.771 [2024-11-19 00:04:48.403757] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:41.771 [2024-11-19 00:04:48.403774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:41.771 [2024-11-19 00:04:48.403877] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:41.771 [2024-11-19 00:04:48.403901] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:41.771 [2024-11-19 00:04:48.403912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:41.771 [2024-11-19 00:04:48.403922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:41.771 [2024-11-19 00:04:48.403931] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:41.771 [2024-11-19 00:04:48.403939] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:41.771 [2024-11-19 00:04:48.403948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:41.771 [2024-11-19 00:04:48.403955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:41.771 [2024-11-19 00:04:48.403962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:41.771 [2024-11-19 00:04:48.403973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.771 [2024-11-19 00:04:48.403980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:41.771 [2024-11-19 00:04:48.403988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:41.771 [2024-11-19 00:04:48.403995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.404077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.771 [2024-11-19 00:04:48.404086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:41.771 [2024-11-19 00:04:48.404093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:41.771 [2024-11-19 00:04:48.404101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.771 [2024-11-19 00:04:48.404229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:41.771 [2024-11-19 00:04:48.404250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:41.771 [2024-11-19 00:04:48.404258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:41.771 [2024-11-19 00:04:48.404282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:41.771 [2024-11-19 00:04:48.404303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.771 [2024-11-19 00:04:48.404317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:41.771 [2024-11-19 00:04:48.404324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:41.771 [2024-11-19 00:04:48.404331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.771 [2024-11-19 00:04:48.404337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:41.771 [2024-11-19 00:04:48.404344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:41.771 [2024-11-19 00:04:48.404358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:41.771 [2024-11-19 00:04:48.404372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404379] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:41.771 [2024-11-19 00:04:48.404392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:41.771 [2024-11-19 00:04:48.404413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:41.771 [2024-11-19 00:04:48.404432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:41.771 [2024-11-19 00:04:48.404451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:41.771 [2024-11-19 00:04:48.404470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.771 [2024-11-19 00:04:48.404482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:41.771 [2024-11-19 00:04:48.404488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:41.771 [2024-11-19 00:04:48.404494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.771 [2024-11-19 00:04:48.404501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:41.771 [2024-11-19 00:04:48.404507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:41.771 [2024-11-19 00:04:48.404514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:41.771 [2024-11-19 00:04:48.404528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:41.771 [2024-11-19 00:04:48.404534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:41.771 [2024-11-19 00:04:48.404549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:41.771 [2024-11-19 00:04:48.404556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.771 [2024-11-19 00:04:48.404571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:41.771 [2024-11-19 00:04:48.404579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:41.771 [2024-11-19 00:04:48.404585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:41.771 
[2024-11-19 00:04:48.404592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:41.771 [2024-11-19 00:04:48.404599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:41.771 [2024-11-19 00:04:48.404605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:41.771 [2024-11-19 00:04:48.404613] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:41.771 [2024-11-19 00:04:48.404622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.771 [2024-11-19 00:04:48.404631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:41.772 [2024-11-19 00:04:48.404638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:41.772 [2024-11-19 00:04:48.404645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:41.772 [2024-11-19 00:04:48.404653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:41.772 [2024-11-19 00:04:48.404660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:41.772 [2024-11-19 00:04:48.404667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:41.772 [2024-11-19 00:04:48.404673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:41.772 [2024-11-19 00:04:48.404681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:41.772 [2024-11-19 00:04:48.404688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:41.772 [2024-11-19 00:04:48.404696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:41.772 [2024-11-19 00:04:48.404730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:41.772 [2024-11-19 00:04:48.404740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:41.772 [2024-11-19 00:04:48.404756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:41.772 [2024-11-19 00:04:48.404764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:41.772 [2024-11-19 00:04:48.404771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:41.772 [2024-11-19 00:04:48.404779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.772 [2024-11-19 00:04:48.404786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:41.772 [2024-11-19 00:04:48.404793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:20:41.772 [2024-11-19 00:04:48.404801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.772 [2024-11-19 00:04:48.432186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.772 [2024-11-19 00:04:48.432222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.772 [2024-11-19 00:04:48.432233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.342 ms 00:20:41.772 [2024-11-19 00:04:48.432241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.772 [2024-11-19 00:04:48.432327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.772 [2024-11-19 00:04:48.432336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:41.772 [2024-11-19 00:04:48.432345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:41.772 [2024-11-19 00:04:48.432353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.481005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.481050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.034 [2024-11-19 00:04:48.481062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.602 ms 00:20:42.034 [2024-11-19 00:04:48.481071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.481112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.481130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.034 [2024-11-19 00:04:48.481139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.034 [2024-11-19 00:04:48.481150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.481579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.481607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.034 [2024-11-19 00:04:48.481617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:20:42.034 [2024-11-19 00:04:48.481625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.481759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.481769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.034 [2024-11-19 00:04:48.481778] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:20:42.034 [2024-11-19 00:04:48.481790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.495637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.495670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.034 [2024-11-19 00:04:48.495683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.828 ms 00:20:42.034 [2024-11-19 00:04:48.495691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.509014] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:42.034 [2024-11-19 00:04:48.509058] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.034 [2024-11-19 00:04:48.509070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.509078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.034 [2024-11-19 00:04:48.509087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.286 ms 00:20:42.034 [2024-11-19 00:04:48.509093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.534028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.534081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.034 [2024-11-19 00:04:48.534092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:20:42.034 [2024-11-19 00:04:48.534100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.546650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.546698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.034 [2024-11-19 00:04:48.546709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.493 ms 00:20:42.034 [2024-11-19 00:04:48.546716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.559000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.559045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.034 [2024-11-19 00:04:48.559056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.239 ms 00:20:42.034 [2024-11-19 00:04:48.559062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.034 [2024-11-19 00:04:48.559732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.034 [2024-11-19 00:04:48.559769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.034 [2024-11-19 00:04:48.559780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:42.034 [2024-11-19 00:04:48.559791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.625986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.626050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.035 [2024-11-19 00:04:48.626074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.175 ms 00:20:42.035 [2024-11-19 00:04:48.626083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.637170] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.035 [2024-11-19 00:04:48.640303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.640350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.035 [2024-11-19 00:04:48.640363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.139 ms 00:20:42.035 [2024-11-19 00:04:48.640371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.640462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.640474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.035 [2024-11-19 00:04:48.640484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:42.035 [2024-11-19 00:04:48.640495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.640567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.640578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.035 [2024-11-19 00:04:48.640587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:42.035 [2024-11-19 00:04:48.640595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.640617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.640625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.035 [2024-11-19 00:04:48.640634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.035 [2024-11-19 00:04:48.640642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.640677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.035 [2024-11-19 00:04:48.640691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.640700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.035 [2024-11-19 00:04:48.640708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:42.035 [2024-11-19 00:04:48.640716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.666817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.666871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.035 [2024-11-19 00:04:48.666885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.083 ms 00:20:42.035 [2024-11-19 00:04:48.666899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.035 [2024-11-19 00:04:48.666988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.035 [2024-11-19 00:04:48.667000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.035 [2024-11-19 00:04:48.667010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:42.035 [2024-11-19 00:04:48.667018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.035 [2024-11-19 00:04:48.668458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.867 ms, result 0
00:20:43.424  [2024-11-19T00:04:50.689Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-19T00:04:52.076Z] Copying: 46/1024 [MB] (32 MBps) [2024-11-19T00:04:53.022Z] Copying: 82/1024 [MB] (35 MBps) [2024-11-19T00:04:53.966Z] Copying: 99/1024 [MB] (16 MBps) [2024-11-19T00:04:54.911Z] Copying: 111/1024 [MB] (12 MBps) [2024-11-19T00:04:55.856Z] Copying: 127/1024 [MB] (15 MBps) [2024-11-19T00:04:56.801Z] Copying: 146/1024 [MB] (18 MBps) [2024-11-19T00:04:57.744Z] Copying: 165/1024 [MB] (19 MBps) [2024-11-19T00:04:58.688Z] Copying: 181/1024 [MB] (16 MBps) [2024-11-19T00:05:00.089Z] Copying: 199/1024 [MB] (17 MBps) [2024-11-19T00:05:01.033Z] Copying: 234/1024 [MB] (35 MBps) [2024-11-19T00:05:01.977Z] Copying: 278/1024 [MB] (43 MBps) [2024-11-19T00:05:02.921Z] Copying: 316/1024 [MB] (38 MBps) [2024-11-19T00:05:03.866Z] Copying: 329/1024 [MB] (13 MBps) [2024-11-19T00:05:04.811Z] Copying: 348/1024 [MB] (18 MBps) [2024-11-19T00:05:05.755Z] Copying: 368/1024 [MB] (19 MBps) [2024-11-19T00:05:06.699Z] Copying: 412/1024 [MB] (44 MBps) [2024-11-19T00:05:08.087Z] Copying: 442/1024 [MB] (29 MBps) [2024-11-19T00:05:09.030Z] Copying: 459/1024 [MB] (17 MBps) [2024-11-19T00:05:10.005Z] Copying: 491/1024 [MB] (32 MBps) [2024-11-19T00:05:10.949Z] Copying: 536/1024 [MB] (44 MBps) [2024-11-19T00:05:11.892Z] Copying: 551/1024 [MB] (15 MBps) [2024-11-19T00:05:12.838Z] Copying: 583/1024 [MB] (32 MBps) [2024-11-19T00:05:13.780Z] Copying: 628/1024 [MB] (45 MBps) [2024-11-19T00:05:14.726Z] Copying: 650/1024 [MB] (21 MBps) [2024-11-19T00:05:16.114Z] Copying: 663/1024 [MB] (13 MBps) [2024-11-19T00:05:16.688Z] Copying: 683/1024 [MB] (19 MBps) [2024-11-19T00:05:17.706Z] Copying: 701/1024 [MB] (18 MBps) [2024-11-19T00:05:19.094Z] Copying: 734/1024 [MB] (32 MBps) [2024-11-19T00:05:20.045Z] Copying: 754/1024 [MB] (19 MBps) [2024-11-19T00:05:20.988Z] Copying: 793/1024 [MB] (39 MBps) [2024-11-19T00:05:21.932Z] Copying: 810/1024 [MB] (17 MBps) [2024-11-19T00:05:22.874Z] Copying: 834/1024 [MB] (24 MBps) [2024-11-19T00:05:23.816Z] Copying: 880/1024 [MB] (45 MBps) [2024-11-19T00:05:24.759Z] Copying: 926/1024 [MB] (46 MBps) [2024-11-19T00:05:25.702Z] Copying: 954/1024 [MB] (28 MBps) [2024-11-19T00:05:27.089Z] Copying: 980/1024 [MB] (25 MBps) [2024-11-19T00:05:28.031Z] Copying: 1008/1024 [MB] (28 MBps) [2024-11-19T00:05:28.995Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-19T00:05:28.995Z] Copying: 1048300/1048576 [kB] (4732 kBps) [2024-11-19T00:05:28.995Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 00:05:28.909744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.303 [2024-11-19 00:05:28.909880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:22.304 [2024-11-19 00:05:28.909897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:21:22.304 [2024-11-19 00:05:28.909911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.304 [2024-11-19 00:05:28.910659] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:22.304 [2024-11-19 00:05:28.914367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.304 [2024-11-19 00:05:28.914398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:22.304 [2024-11-19 00:05:28.914407] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:21:22.304 [2024-11-19 00:05:28.914415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.304 [2024-11-19 00:05:28.924221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.304 [2024-11-19 00:05:28.924251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.304 [2024-11-19 00:05:28.924259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.342 ms 00:21:22.304 [2024-11-19 00:05:28.924266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.304 [2024-11-19 00:05:28.940840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.304 [2024-11-19 00:05:28.940868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.304 [2024-11-19 00:05:28.940877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.558 ms 00:21:22.304 [2024-11-19 00:05:28.940883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.304 [2024-11-19 00:05:28.945744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.304 [2024-11-19 00:05:28.945767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.304 [2024-11-19 00:05:28.945775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:21:22.304 [2024-11-19 00:05:28.945782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.304 [2024-11-19 00:05:28.963793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.304 [2024-11-19 00:05:28.963821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.304 [2024-11-19 00:05:28.963829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.976 ms 00:21:22.304 [2024-11-19 00:05:28.963834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.304 [2024-11-19 00:05:28.974923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.304 [2024-11-19 00:05:28.974950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.304 [2024-11-19 00:05:28.974960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:21:22.304 [2024-11-19 00:05:28.974966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.566 [2024-11-19 00:05:29.036893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.566 [2024-11-19 00:05:29.036923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.566 [2024-11-19 00:05:29.036932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.908 ms 00:21:22.566 [2024-11-19 00:05:29.036938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.566 [2024-11-19 00:05:29.054407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.566 [2024-11-19 00:05:29.054433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.567 [2024-11-19 00:05:29.054441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.457 ms 00:21:22.567 [2024-11-19 00:05:29.054446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.567 [2024-11-19 00:05:29.071654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.567 [2024-11-19 00:05:29.071685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 
00:21:22.567 [2024-11-19 00:05:29.071693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.184 ms 00:21:22.567 [2024-11-19 00:05:29.071698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.567 [2024-11-19 00:05:29.088503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.567 [2024-11-19 00:05:29.088529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.567 [2024-11-19 00:05:29.088537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.781 ms 00:21:22.567 [2024-11-19 00:05:29.088543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.567 [2024-11-19 00:05:29.105243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.567 [2024-11-19 00:05:29.105267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.567 [2024-11-19 00:05:29.105274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.660 ms 00:21:22.567 [2024-11-19 00:05:29.105279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.567 [2024-11-19 00:05:29.105302] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.567 [2024-11-19 00:05:29.105312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:21:22.567 [2024-11-19 00:05:29.105320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 
00:05:29.105404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:21:22.567 [2024-11-19 00:05:29.105542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.567 [2024-11-19 00:05:29.105731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.568 [2024-11-19 00:05:29.105814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:22.568 [2024-11-19 00:05:29.105875] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:21:22.568 [2024-11-19 00:05:29.105881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cac7f8-48cf-499b-b792-6ff17c544639
00:21:22.568 [2024-11-19 00:05:29.105886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496
00:21:22.568 [2024-11-19 00:05:29.105892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456
00:21:22.568 [2024-11-19 00:05:29.105897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496
00:21:22.568 [2024-11-19 00:05:29.105903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090
00:21:22.568 [2024-11-19 00:05:29.105908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:22.568 [2024-11-19 00:05:29.105917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:22.568 [2024-11-19 00:05:29.105926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:22.568 [2024-11-19 00:05:29.105931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:22.568 [2024-11-19 00:05:29.105936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:22.568 [2024-11-19 00:05:29.105941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.568 [2024-11-19 00:05:29.105947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:22.568 [2024-11-19 00:05:29.105953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms
00:21:22.568 [2024-11-19 00:05:29.105958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.115421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.568 [2024-11-19 00:05:29.115446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:22.568 [2024-11-19 00:05:29.115454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.452 ms
00:21:22.568 [2024-11-19 00:05:29.115462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.115743]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.568 [2024-11-19 00:05:29.115755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.568 [2024-11-19 00:05:29.115762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:22.568 [2024-11-19 00:05:29.115768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.141175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.141201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.568 [2024-11-19 00:05:29.141214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.141220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.141258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.141264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.568 [2024-11-19 00:05:29.141270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.141275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.141314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.141322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.568 [2024-11-19 00:05:29.141328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.141336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.141346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.141352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.568 [2024-11-19 00:05:29.141358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.141363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.199435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.199465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.568 [2024-11-19 00:05:29.199476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.199482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.247693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.247725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.568 [2024-11-19 00:05:29.247733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.247740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.568 [2024-11-19 00:05:29.247788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.568 [2024-11-19 00:05:29.247795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.568 [2024-11-19 00:05:29.247801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.568 [2024-11-19 00:05:29.247807] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.247834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:22.568 [2024-11-19 00:05:29.247841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:22.568 [2024-11-19 00:05:29.247847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:22.568 [2024-11-19 00:05:29.247852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.247915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:22.568 [2024-11-19 00:05:29.247923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:22.568 [2024-11-19 00:05:29.247929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:22.568 [2024-11-19 00:05:29.247935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.247958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:22.568 [2024-11-19 00:05:29.247965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:22.568 [2024-11-19 00:05:29.247971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:22.568 [2024-11-19 00:05:29.247976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.248003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:22.568 [2024-11-19 00:05:29.248009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:22.568 [2024-11-19 00:05:29.248016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:22.568 [2024-11-19 00:05:29.248021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.248054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:22.568 [2024-11-19 00:05:29.248067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:22.568 [2024-11-19 00:05:29.248073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:22.568 [2024-11-19 00:05:29.248078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:22.568 [2024-11-19 00:05:29.248179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.563 ms, result 0
00:21:23.956 
00:21:23.956 
00:21:23.956 00:05:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:21:23.956 [2024-11-19 00:05:30.579708] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:21:23.956 [2024-11-19 00:05:30.579847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76558 ] 00:21:24.217 [2024-11-19 00:05:30.738480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.217 [2024-11-19 00:05:30.822524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.480 [2024-11-19 00:05:31.026585] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.480 [2024-11-19 00:05:31.026636] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.741 [2024-11-19 00:05:31.177748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.177783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.741 [2024-11-19 00:05:31.177796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.741 [2024-11-19 00:05:31.177803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.177837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.177845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.741 [2024-11-19 00:05:31.177852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:24.741 [2024-11-19 00:05:31.177858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.177870] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:24.741 [2024-11-19 00:05:31.178440] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.741 [2024-11-19 00:05:31.178458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.178463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.741 [2024-11-19 00:05:31.178470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:21:24.741 [2024-11-19 00:05:31.178476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.179418] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.741 [2024-11-19 00:05:31.189009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.189117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.741 [2024-11-19 00:05:31.189140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.592 ms 00:21:24.741 [2024-11-19 00:05:31.189146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.189198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.189206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.741 [2024-11-19 00:05:31.189212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:24.741 [2024-11-19 00:05:31.189217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.193529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:24.741 [2024-11-19 00:05:31.193554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.741 [2024-11-19 00:05:31.193562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:21:24.741 [2024-11-19 00:05:31.193567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.193623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.193630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.741 [2024-11-19 00:05:31.193636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:24.741 [2024-11-19 00:05:31.193642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.193673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.193680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.741 [2024-11-19 00:05:31.193686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.741 [2024-11-19 00:05:31.193691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.193704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.741 [2024-11-19 00:05:31.196270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.196369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.741 [2024-11-19 00:05:31.196381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:21:24.741 [2024-11-19 00:05:31.196390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.196417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.741 [2024-11-19 00:05:31.196424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.741 [2024-11-19 00:05:31.196430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.741 [2024-11-19 00:05:31.196436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.741 [2024-11-19 00:05:31.196449] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.741 [2024-11-19 00:05:31.196462] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.741 [2024-11-19 00:05:31.196489] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.741 [2024-11-19 00:05:31.196502] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.741 [2024-11-19 00:05:31.196580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.741 [2024-11-19 00:05:31.196588] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.741 [2024-11-19 00:05:31.196596] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.741 [2024-11-19 00:05:31.196603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.741 [2024-11-19 00:05:31.196610] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.741 [2024-11-19 00:05:31.196616] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:24.741 [2024-11-19 00:05:31.196622] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.741 [2024-11-19 00:05:31.196628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.741 [2024-11-19 00:05:31.196633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.741 [2024-11-19 00:05:31.196641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.742 [2024-11-19 00:05:31.196646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.742 [2024-11-19 00:05:31.196652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:21:24.742 [2024-11-19 00:05:31.196658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.742 [2024-11-19 00:05:31.196720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.742 [2024-11-19 00:05:31.196727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.742 [2024-11-19 00:05:31.196732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:24.742 [2024-11-19 00:05:31.196737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.742 [2024-11-19 00:05:31.196811] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.742 [2024-11-19 00:05:31.196821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.742 [2024-11-19 00:05:31.196827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.742 [2024-11-19 00:05:31.196844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.742 [2024-11-19 00:05:31.196860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.742 [2024-11-19 00:05:31.196870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.742 [2024-11-19 00:05:31.196875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:24.742 [2024-11-19 00:05:31.196880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.742 [2024-11-19 00:05:31.196885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.742 [2024-11-19 00:05:31.196891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:24.742 [2024-11-19 00:05:31.196900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.742 [2024-11-19 00:05:31.196910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196915] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.742 [2024-11-19 00:05:31.196925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.742 [2024-11-19 00:05:31.196939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.742 [2024-11-19 00:05:31.196955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.742 [2024-11-19 00:05:31.196969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.742 [2024-11-19 00:05:31.196979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.742 [2024-11-19 00:05:31.196984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:24.742 [2024-11-19 00:05:31.196989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.742 [2024-11-19 00:05:31.196994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:24.742 [2024-11-19 00:05:31.196999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:24.742 [2024-11-19 00:05:31.197004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.742 [2024-11-19 00:05:31.197009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.742 [2024-11-19 00:05:31.197013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:24.742 [2024-11-19 00:05:31.197018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.197023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.742 [2024-11-19 00:05:31.197027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:24.742 [2024-11-19 00:05:31.197032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.197037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.742 [2024-11-19 00:05:31.197043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.742 [2024-11-19 00:05:31.197048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.742 [2024-11-19 00:05:31.197055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.742 [2024-11-19 00:05:31.197061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.742 [2024-11-19 00:05:31.197067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.742 [2024-11-19 00:05:31.197072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.742 
[2024-11-19 00:05:31.197077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.742 [2024-11-19 00:05:31.197081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.742 [2024-11-19 00:05:31.197086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.742 [2024-11-19 00:05:31.197092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.742 [2024-11-19 00:05:31.197099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:24.742 [2024-11-19 00:05:31.197111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:24.742 [2024-11-19 00:05:31.197117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:24.742 [2024-11-19 00:05:31.197132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:24.742 [2024-11-19 00:05:31.197138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:24.742 [2024-11-19 00:05:31.197144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:24.742 [2024-11-19 00:05:31.197149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:24.742 [2024-11-19 00:05:31.197154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:24.742 [2024-11-19 00:05:31.197159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:24.742 [2024-11-19 00:05:31.197165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:24.742 [2024-11-19 00:05:31.197193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.742 [2024-11-19 00:05:31.197201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.742 [2024-11-19 00:05:31.197207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.743 [2024-11-19 00:05:31.197213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.743 [2024-11-19 00:05:31.197219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.743 [2024-11-19 00:05:31.197225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.743 [2024-11-19 00:05:31.197230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.197236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.743 [2024-11-19 00:05:31.197242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:21:24.743 [2024-11-19 00:05:31.197248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.218179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.218215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.743 [2024-11-19 00:05:31.218224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.899 ms 00:21:24.743 [2024-11-19 00:05:31.218230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.218294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.218301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.743 [2024-11-19 00:05:31.218307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:24.743 [2024-11-19 00:05:31.218312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.259025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.259056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.743 [2024-11-19 00:05:31.259065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.674 ms 00:21:24.743 [2024-11-19 00:05:31.259071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.259100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.259107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.743 [2024-11-19 00:05:31.259114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:24.743 [2024-11-19 00:05:31.259136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.259460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.259472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.743 [2024-11-19 00:05:31.259479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:24.743 [2024-11-19 00:05:31.259485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.259597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.259605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.743 [2024-11-19 00:05:31.259611] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:24.743 [2024-11-19 00:05:31.259617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.269937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.269963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.743 [2024-11-19 00:05:31.269970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.301 ms 00:21:24.743 [2024-11-19 00:05:31.269978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.279781] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:24.743 [2024-11-19 00:05:31.279809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.743 [2024-11-19 00:05:31.279818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.279825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.743 [2024-11-19 00:05:31.279831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.763 ms 00:21:24.743 [2024-11-19 00:05:31.279837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.298820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.298927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.743 [2024-11-19 00:05:31.298940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.953 ms 00:21:24.743 [2024-11-19 00:05:31.298946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.307987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.308018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.743 [2024-11-19 00:05:31.308026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.020 ms 00:21:24.743 [2024-11-19 00:05:31.308031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.316718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.316742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.743 [2024-11-19 00:05:31.316750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.662 ms 00:21:24.743 [2024-11-19 00:05:31.316755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.317213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.317298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.743 [2024-11-19 00:05:31.317309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:21:24.743 [2024-11-19 00:05:31.317318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.360930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.361070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.743 [2024-11-19 00:05:31.361088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.596 ms 00:21:24.743 [2024-11-19 00:05:31.361094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.368793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.743 [2024-11-19 00:05:31.370431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.370453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.743 [2024-11-19 00:05:31.370461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.309 ms 00:21:24.743 [2024-11-19 00:05:31.370467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.370517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.370526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.743 [2024-11-19 00:05:31.370534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.743 [2024-11-19 00:05:31.370541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.371582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.371674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.743 [2024-11-19 00:05:31.371685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:21:24.743 [2024-11-19 00:05:31.371691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.371710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.371717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.743 [2024-11-19 00:05:31.371724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.743 [2024-11-19 00:05:31.371730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.371768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.743 [2024-11-19 00:05:31.371779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.371785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.743 [2024-11-19 00:05:31.371791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:24.743 [2024-11-19 00:05:31.371796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.389312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.743 [2024-11-19 00:05:31.389337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.743 [2024-11-19 00:05:31.389346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.502 ms 00:21:24.743 [2024-11-19 00:05:31.389354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.743 [2024-11-19 00:05:31.389409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.744 [2024-11-19 00:05:31.389416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.744 [2024-11-19 00:05:31.389422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:24.744 [2024-11-19 00:05:31.389428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.744 
[2024-11-19 00:05:31.390141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 212.053 ms, result 0 00:21:26.125  [2024-11-19T00:05:33.760Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-19T00:05:34.704Z] Copying: 36/1024 [MB] (19 MBps) [2024-11-19T00:05:35.649Z] Copying: 57/1024 [MB] (21 MBps) [2024-11-19T00:05:36.594Z] Copying: 77/1024 [MB] (20 MBps) [2024-11-19T00:05:37.539Z] Copying: 98/1024 [MB] (20 MBps) [2024-11-19T00:05:38.928Z] Copying: 119/1024 [MB] (20 MBps) [2024-11-19T00:05:39.871Z] Copying: 138/1024 [MB] (18 MBps) [2024-11-19T00:05:40.817Z] Copying: 156/1024 [MB] (17 MBps) [2024-11-19T00:05:41.762Z] Copying: 170/1024 [MB] (13 MBps) [2024-11-19T00:05:42.707Z] Copying: 185/1024 [MB] (15 MBps) [2024-11-19T00:05:43.651Z] Copying: 203/1024 [MB] (17 MBps) [2024-11-19T00:05:44.596Z] Copying: 216/1024 [MB] (13 MBps) [2024-11-19T00:05:45.541Z] Copying: 227/1024 [MB] (10 MBps) [2024-11-19T00:05:46.570Z] Copying: 237/1024 [MB] (10 MBps) [2024-11-19T00:05:47.958Z] Copying: 248/1024 [MB] (10 MBps) [2024-11-19T00:05:48.902Z] Copying: 263/1024 [MB] (14 MBps) [2024-11-19T00:05:49.844Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-19T00:05:50.788Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-19T00:05:51.732Z] Copying: 301/1024 [MB] (16 MBps) [2024-11-19T00:05:52.675Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-19T00:05:53.619Z] Copying: 332/1024 [MB] (20 MBps) [2024-11-19T00:05:54.569Z] Copying: 349/1024 [MB] (17 MBps) [2024-11-19T00:05:55.955Z] Copying: 360/1024 [MB] (11 MBps) [2024-11-19T00:05:56.900Z] Copying: 371/1024 [MB] (10 MBps) [2024-11-19T00:05:57.844Z] Copying: 385/1024 [MB] (14 MBps) [2024-11-19T00:05:58.789Z] Copying: 398/1024 [MB] (12 MBps) [2024-11-19T00:05:59.734Z] Copying: 415/1024 [MB] (16 MBps) [2024-11-19T00:06:00.680Z] Copying: 428/1024 [MB] (13 MBps) [2024-11-19T00:06:01.620Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-19T00:06:02.564Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-19T00:06:03.953Z] Copying: 469/1024 [MB] (19 MBps) [2024-11-19T00:06:04.898Z] Copying: 486/1024 [MB] (17 MBps) [2024-11-19T00:06:05.844Z] Copying: 503/1024 [MB] (17 MBps) [2024-11-19T00:06:06.788Z] Copying: 522/1024 [MB] (18 MBps) [2024-11-19T00:06:07.730Z] Copying: 535/1024 [MB] (13 MBps) [2024-11-19T00:06:08.671Z] Copying: 546/1024 [MB] (10 MBps) [2024-11-19T00:06:09.613Z] Copying: 560/1024 [MB] (13 MBps) [2024-11-19T00:06:10.554Z] Copying: 578/1024 [MB] (18 MBps) [2024-11-19T00:06:11.936Z] Copying: 595/1024 [MB] (16 MBps) [2024-11-19T00:06:12.877Z] Copying: 610/1024 [MB] (15 MBps) [2024-11-19T00:06:13.818Z] Copying: 623/1024 [MB] (12 MBps) [2024-11-19T00:06:14.757Z] Copying: 637/1024 [MB] (14 MBps) [2024-11-19T00:06:15.754Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-19T00:06:16.698Z] Copying: 661/1024 [MB] (12 MBps) [2024-11-19T00:06:17.644Z] Copying: 675/1024 [MB] (14 MBps) [2024-11-19T00:06:18.587Z] Copying: 686/1024 [MB] (10 MBps) [2024-11-19T00:06:19.975Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-19T00:06:20.548Z] Copying: 711/1024 [MB] (14 MBps) [2024-11-19T00:06:21.937Z] Copying: 724/1024 [MB] (12 MBps) [2024-11-19T00:06:22.882Z] Copying: 742/1024 [MB] (17 MBps) [2024-11-19T00:06:23.826Z] Copying: 759/1024 [MB] (17 MBps) [2024-11-19T00:06:24.770Z] Copying: 772/1024 [MB] (12 MBps) [2024-11-19T00:06:25.715Z] Copying: 796/1024 [MB] (24 MBps) [2024-11-19T00:06:26.663Z] Copying: 810/1024 [MB] (13 MBps) [2024-11-19T00:06:27.608Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-19T00:06:28.553Z] Copying: 836/1024 [MB] (14 MBps) 
[2024-11-19T00:06:29.937Z] Copying: 857/1024 [MB] (20 MBps) [2024-11-19T00:06:30.881Z] Copying: 869/1024 [MB] (12 MBps) [2024-11-19T00:06:31.823Z] Copying: 889/1024 [MB] (20 MBps) [2024-11-19T00:06:32.765Z] Copying: 904/1024 [MB] (15 MBps) [2024-11-19T00:06:33.710Z] Copying: 915/1024 [MB] (11 MBps) [2024-11-19T00:06:34.652Z] Copying: 926/1024 [MB] (11 MBps) [2024-11-19T00:06:35.593Z] Copying: 948/1024 [MB] (21 MBps) [2024-11-19T00:06:36.978Z] Copying: 961/1024 [MB] (12 MBps) [2024-11-19T00:06:37.551Z] Copying: 972/1024 [MB] (10 MBps) [2024-11-19T00:06:38.939Z] Copying: 984/1024 [MB] (11 MBps) [2024-11-19T00:06:39.881Z] Copying: 1003/1024 [MB] (19 MBps) [2024-11-19T00:06:40.455Z] Copying: 1017/1024 [MB] (13 MBps) [2024-11-19T00:06:40.455Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 00:06:40.240401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.763 [2024-11-19 00:06:40.240488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:33.763 [2024-11-19 00:06:40.240505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:33.763 [2024-11-19 00:06:40.240515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.763 [2024-11-19 00:06:40.240552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:33.763 [2024-11-19 00:06:40.243581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.763 [2024-11-19 00:06:40.243873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:33.763 [2024-11-19 00:06:40.243897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:22:33.763 [2024-11-19 00:06:40.243907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.763 [2024-11-19 00:06:40.244212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.763 [2024-11-19 00:06:40.244225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:33.763 [2024-11-19 00:06:40.244235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:33.763 [2024-11-19 00:06:40.244244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.763 [2024-11-19 00:06:40.250168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.763 [2024-11-19 00:06:40.250214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:33.763 [2024-11-19 00:06:40.250225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.903 ms 00:22:33.763 [2024-11-19 00:06:40.250234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.763 [2024-11-19 00:06:40.256779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.764 [2024-11-19 00:06:40.256812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:33.764 [2024-11-19 00:06:40.256824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.502 ms 00:22:33.764 [2024-11-19 00:06:40.256832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.764 [2024-11-19 00:06:40.283820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.764 [2024-11-19 00:06:40.283859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:33.764 [2024-11-19 00:06:40.283871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.913 ms 00:22:33.764 [2024-11-19 
00:06:40.283879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.764 [2024-11-19 00:06:40.299941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.764 [2024-11-19 00:06:40.299984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:33.764 [2024-11-19 00:06:40.299997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.015 ms 00:22:33.764 [2024-11-19 00:06:40.300006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.026 [2024-11-19 00:06:40.694260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.026 [2024-11-19 00:06:40.694304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.026 [2024-11-19 00:06:40.694316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 394.200 ms 00:22:34.026 [2024-11-19 00:06:40.694326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.289 [2024-11-19 00:06:40.720385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.289 [2024-11-19 00:06:40.720422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:34.289 [2024-11-19 00:06:40.720435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.043 ms 00:22:34.289 [2024-11-19 00:06:40.720443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.289 [2024-11-19 00:06:40.745578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.289 [2024-11-19 00:06:40.745624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:34.289 [2024-11-19 00:06:40.745648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.089 ms 00:22:34.289 [2024-11-19 00:06:40.745656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.289 [2024-11-19 00:06:40.770276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.289 [2024-11-19 00:06:40.770320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.289 [2024-11-19 00:06:40.770332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:22:34.289 [2024-11-19 00:06:40.770339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.289 [2024-11-19 00:06:40.794909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.289 [2024-11-19 00:06:40.794968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.289 [2024-11-19 00:06:40.794980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.500 ms 00:22:34.289 [2024-11-19 00:06:40.794988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.289 [2024-11-19 00:06:40.795031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.289 [2024-11-19 00:06:40.795047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:34.289 [2024-11-19 00:06:40.795058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795084] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.289 [2024-11-19 00:06:40.795295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 
00:06:40.795314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:22:34.290 [2024-11-19 00:06:40.795503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:34.290 [2024-11-19 00:06:40.795921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:34.290 [2024-11-19 00:06:40.795931] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cac7f8-48cf-499b-b792-6ff17c544639 00:22:34.290 [2024-11-19 00:06:40.795941] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:34.290 [2024-11-19 00:06:40.795950] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25536 00:22:34.290 [2024-11-19 
00:06:40.795958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24576 00:22:34.290 [2024-11-19 00:06:40.795967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0391 00:22:34.290 [2024-11-19 00:06:40.795975] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:34.290 [2024-11-19 00:06:40.795990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:34.290 [2024-11-19 00:06:40.796000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:34.290 [2024-11-19 00:06:40.796013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:34.290 [2024-11-19 00:06:40.796020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:34.290 [2024-11-19 00:06:40.796029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.290 [2024-11-19 00:06:40.796037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:34.290 [2024-11-19 00:06:40.796046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:22:34.290 [2024-11-19 00:06:40.796054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.290 [2024-11-19 00:06:40.809457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.290 [2024-11-19 00:06:40.809500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:34.290 [2024-11-19 00:06:40.809512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.383 ms 00:22:34.291 [2024-11-19 00:06:40.809526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.809924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.291 [2024-11-19 00:06:40.809946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:34.291 [2024-11-19 00:06:40.809957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:22:34.291 [2024-11-19 00:06:40.809965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.846271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.291 [2024-11-19 00:06:40.846318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.291 [2024-11-19 00:06:40.846337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.291 [2024-11-19 00:06:40.846347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.846409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.291 [2024-11-19 00:06:40.846422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.291 [2024-11-19 00:06:40.846431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.291 [2024-11-19 00:06:40.846440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.846515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.291 [2024-11-19 00:06:40.846527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.291 [2024-11-19 00:06:40.846537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.291 [2024-11-19 00:06:40.846550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.846567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:34.291 [2024-11-19 00:06:40.846576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.291 [2024-11-19 00:06:40.846587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.291 [2024-11-19 00:06:40.846597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.291 [2024-11-19 00:06:40.929506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.291 [2024-11-19 00:06:40.929560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.291 [2024-11-19 00:06:40.929579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.291 [2024-11-19 00:06:40.929588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.552 [2024-11-19 00:06:40.997599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.552 [2024-11-19 00:06:40.997651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.552 [2024-11-19 00:06:40.997662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.552 [2024-11-19 00:06:40.997671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.552 [2024-11-19 00:06:40.997724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.552 [2024-11-19 00:06:40.997735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.552 [2024-11-19 00:06:40.997744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.552 [2024-11-19 00:06:40.997753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.552 [2024-11-19 00:06:40.997813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.552 [2024-11-19 00:06:40.997823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.552 [2024-11-19 00:06:40.997832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.553 [2024-11-19 00:06:40.997840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.553 [2024-11-19 00:06:40.997936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.553 [2024-11-19 00:06:40.997948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.553 [2024-11-19 00:06:40.997958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.553 [2024-11-19 00:06:40.997966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.553 [2024-11-19 00:06:40.998006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.553 [2024-11-19 00:06:40.998016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:34.553 [2024-11-19 00:06:40.998024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.553 [2024-11-19 00:06:40.998033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.553 [2024-11-19 00:06:40.998076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.553 [2024-11-19 00:06:40.998088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.553 [2024-11-19 00:06:40.998097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.553 [2024-11-19 00:06:40.998106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.553 
[2024-11-19 00:06:40.998192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.553 [2024-11-19 00:06:40.998205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.553 [2024-11-19 00:06:40.998213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.553 [2024-11-19 00:06:40.998222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.553 [2024-11-19 00:06:40.998359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 757.918 ms, result 0 00:22:35.124 00:22:35.125 00:22:35.125 00:06:41 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:37.701 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:37.701 Process with pid 74535 is not found 00:22:37.701 Remove shared memory files 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74535 00:22:37.701 00:06:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74535 ']' 00:22:37.701 00:06:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74535 00:22:37.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74535) - No such process 00:22:37.701 00:06:44 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74535 is not found' 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:37.701 00:06:44 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:37.701 ************************************ 00:22:37.701 END TEST ftl_restore 00:22:37.701 ************************************ 00:22:37.701 00:22:37.701 real 4m27.769s 00:22:37.701 user 4m15.874s 00:22:37.701 sys 0m12.010s 00:22:37.701 00:06:44 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.701 00:06:44 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:37.701 00:06:44 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:37.701 00:06:44 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:37.701 00:06:44 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.701 00:06:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:37.701 ************************************ 00:22:37.701 START TEST ftl_dirty_shutdown 00:22:37.701 ************************************ 00:22:37.701 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:37.701 * Looking for test storage... 00:22:37.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:37.701 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.701 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.701 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.978 --rc genhtml_branch_coverage=1 00:22:37.978 --rc genhtml_function_coverage=1 00:22:37.978 --rc genhtml_legend=1 00:22:37.978 --rc geninfo_all_blocks=1 00:22:37.978 --rc geninfo_unexecuted_blocks=1 00:22:37.978 00:22:37.978 ' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.978 --rc genhtml_branch_coverage=1 00:22:37.978 --rc genhtml_function_coverage=1 00:22:37.978 --rc genhtml_legend=1 00:22:37.978 --rc geninfo_all_blocks=1 00:22:37.978 --rc geninfo_unexecuted_blocks=1 00:22:37.978 00:22:37.978 ' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.978 --rc genhtml_branch_coverage=1 00:22:37.978 --rc genhtml_function_coverage=1 00:22:37.978 --rc genhtml_legend=1 00:22:37.978 --rc geninfo_all_blocks=1 00:22:37.978 --rc geninfo_unexecuted_blocks=1 00:22:37.978 00:22:37.978 ' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.978 --rc genhtml_branch_coverage=1 00:22:37.978 --rc genhtml_function_coverage=1 00:22:37.978 --rc genhtml_legend=1 00:22:37.978 --rc geninfo_all_blocks=1 00:22:37.978 --rc geninfo_unexecuted_blocks=1 00:22:37.978 00:22:37.978 ' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.978 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:37.979 00:06:44 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77382 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77382 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77382 ']' 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.979 00:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.979 [2024-11-19 00:06:44.530766] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
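For reference, the launch-and-wait pattern the harness runs above reduces to the following minimal bash sketch. The spdk_tgt path and the -m 0x1 core mask are taken from the log; the polling loop is an illustrative stand-in for the waitforlisten helper and assumes the default RPC socket at /var/tmp/spdk.sock.

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Start the target on core 0 and remember its pid for later cleanup.
  "$spdk_tgt_bin" -m 0x1 &
  svcpid=$!
  # Poll the default RPC socket until the target answers; rpc_get_methods is
  # a cheap built-in RPC that succeeds as soon as the listener is up.
  until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done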
00:22:37.979 [2024-11-19 00:06:44.531038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77382 ] 00:22:38.240 [2024-11-19 00:06:44.689585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.240 [2024-11-19 00:06:44.764549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:38.813 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:39.074 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:39.334 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:39.334 { 00:22:39.334 "name": "nvme0n1", 00:22:39.334 "aliases": [ 00:22:39.334 "1b8c3b0a-e846-41a7-9901-5abeff86302d" 00:22:39.334 ], 00:22:39.334 "product_name": "NVMe disk", 00:22:39.334 "block_size": 4096, 00:22:39.334 "num_blocks": 1310720, 00:22:39.334 "uuid": "1b8c3b0a-e846-41a7-9901-5abeff86302d", 00:22:39.334 "numa_id": -1, 00:22:39.334 "assigned_rate_limits": { 00:22:39.334 "rw_ios_per_sec": 0, 00:22:39.334 "rw_mbytes_per_sec": 0, 00:22:39.334 "r_mbytes_per_sec": 0, 00:22:39.334 "w_mbytes_per_sec": 0 00:22:39.334 }, 00:22:39.334 "claimed": true, 00:22:39.334 "claim_type": "read_many_write_one", 00:22:39.334 "zoned": false, 00:22:39.334 "supported_io_types": { 00:22:39.334 "read": true, 00:22:39.334 "write": true, 00:22:39.334 "unmap": true, 00:22:39.334 "flush": true, 00:22:39.334 "reset": true, 00:22:39.334 "nvme_admin": true, 00:22:39.334 "nvme_io": true, 00:22:39.334 "nvme_io_md": false, 00:22:39.334 "write_zeroes": true, 00:22:39.334 "zcopy": false, 00:22:39.334 "get_zone_info": false, 00:22:39.334 "zone_management": false, 00:22:39.334 "zone_append": false, 00:22:39.334 "compare": true, 00:22:39.334 "compare_and_write": false, 00:22:39.334 "abort": true, 00:22:39.334 "seek_hole": false, 00:22:39.334 "seek_data": false, 00:22:39.334 
"copy": true, 00:22:39.334 "nvme_iov_md": false 00:22:39.334 }, 00:22:39.334 "driver_specific": { 00:22:39.334 "nvme": [ 00:22:39.334 { 00:22:39.334 "pci_address": "0000:00:11.0", 00:22:39.334 "trid": { 00:22:39.334 "trtype": "PCIe", 00:22:39.334 "traddr": "0000:00:11.0" 00:22:39.334 }, 00:22:39.334 "ctrlr_data": { 00:22:39.334 "cntlid": 0, 00:22:39.334 "vendor_id": "0x1b36", 00:22:39.334 "model_number": "QEMU NVMe Ctrl", 00:22:39.334 "serial_number": "12341", 00:22:39.334 "firmware_revision": "8.0.0", 00:22:39.334 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:39.334 "oacs": { 00:22:39.334 "security": 0, 00:22:39.334 "format": 1, 00:22:39.334 "firmware": 0, 00:22:39.334 "ns_manage": 1 00:22:39.334 }, 00:22:39.334 "multi_ctrlr": false, 00:22:39.334 "ana_reporting": false 00:22:39.334 }, 00:22:39.334 "vs": { 00:22:39.334 "nvme_version": "1.4" 00:22:39.334 }, 00:22:39.334 "ns_data": { 00:22:39.334 "id": 1, 00:22:39.334 "can_share": false 00:22:39.334 } 00:22:39.334 } 00:22:39.334 ], 00:22:39.334 "mp_policy": "active_passive" 00:22:39.334 } 00:22:39.334 } 00:22:39.334 ]' 00:22:39.334 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:39.334 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:39.335 00:06:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:39.596 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0ae7e3db-c8a3-4cf0-9470-faa269303142 00:22:39.596 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:39.596 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ae7e3db-c8a3-4cf0-9470-faa269303142 00:22:39.857 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:39.857 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a6c9732a-38e9-401f-9a5a-013e5a5549db 00:22:39.857 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a6c9732a-38e9-401f-9a5a-013e5a5549db 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:40.118 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:40.380 { 00:22:40.380 "name": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:40.380 "aliases": [ 00:22:40.380 "lvs/nvme0n1p0" 00:22:40.380 ], 00:22:40.380 "product_name": "Logical Volume", 00:22:40.380 "block_size": 4096, 00:22:40.380 "num_blocks": 26476544, 00:22:40.380 "uuid": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:40.380 "assigned_rate_limits": { 00:22:40.380 "rw_ios_per_sec": 0, 00:22:40.380 "rw_mbytes_per_sec": 0, 00:22:40.380 "r_mbytes_per_sec": 0, 00:22:40.380 "w_mbytes_per_sec": 0 00:22:40.380 }, 00:22:40.380 "claimed": false, 00:22:40.380 "zoned": false, 00:22:40.380 "supported_io_types": { 00:22:40.380 "read": true, 00:22:40.380 "write": true, 00:22:40.380 "unmap": true, 00:22:40.380 "flush": false, 00:22:40.380 "reset": true, 00:22:40.380 "nvme_admin": false, 00:22:40.380 "nvme_io": false, 00:22:40.380 "nvme_io_md": false, 00:22:40.380 "write_zeroes": true, 00:22:40.380 "zcopy": false, 00:22:40.380 "get_zone_info": false, 00:22:40.380 "zone_management": false, 00:22:40.380 "zone_append": false, 00:22:40.380 "compare": false, 00:22:40.380 "compare_and_write": false, 00:22:40.380 "abort": false, 00:22:40.380 "seek_hole": true, 00:22:40.380 "seek_data": true, 00:22:40.380 "copy": false, 00:22:40.380 "nvme_iov_md": false 00:22:40.380 }, 00:22:40.380 "driver_specific": { 00:22:40.380 "lvol": { 00:22:40.380 "lvol_store_uuid": "a6c9732a-38e9-401f-9a5a-013e5a5549db", 00:22:40.380 "base_bdev": "nvme0n1", 00:22:40.380 "thin_provision": true, 00:22:40.380 "num_allocated_clusters": 0, 00:22:40.380 "snapshot": false, 00:22:40.380 "clone": false, 00:22:40.380 "esnap_clone": false 00:22:40.380 } 00:22:40.380 } 00:22:40.380 } 00:22:40.380 ]' 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:40.380 00:06:46 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:40.641 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:40.902 { 00:22:40.902 "name": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:40.902 "aliases": [ 00:22:40.902 "lvs/nvme0n1p0" 00:22:40.902 ], 00:22:40.902 "product_name": "Logical Volume", 00:22:40.902 "block_size": 4096, 00:22:40.902 "num_blocks": 26476544, 00:22:40.902 "uuid": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:40.902 "assigned_rate_limits": { 00:22:40.902 "rw_ios_per_sec": 0, 00:22:40.902 "rw_mbytes_per_sec": 0, 00:22:40.902 "r_mbytes_per_sec": 0, 00:22:40.902 "w_mbytes_per_sec": 0 00:22:40.902 }, 00:22:40.902 "claimed": false, 00:22:40.902 "zoned": false, 00:22:40.902 "supported_io_types": { 00:22:40.902 "read": true, 00:22:40.902 "write": true, 00:22:40.902 "unmap": true, 00:22:40.902 "flush": false, 00:22:40.902 "reset": true, 00:22:40.902 "nvme_admin": false, 00:22:40.902 "nvme_io": false, 00:22:40.902 "nvme_io_md": false, 00:22:40.902 "write_zeroes": true, 00:22:40.902 "zcopy": false, 00:22:40.902 "get_zone_info": false, 00:22:40.902 "zone_management": false, 00:22:40.902 "zone_append": false, 00:22:40.902 "compare": false, 00:22:40.902 "compare_and_write": false, 00:22:40.902 "abort": false, 00:22:40.902 "seek_hole": true, 00:22:40.902 "seek_data": true, 00:22:40.902 "copy": false, 00:22:40.902 "nvme_iov_md": false 00:22:40.902 }, 00:22:40.902 "driver_specific": { 00:22:40.902 "lvol": { 00:22:40.902 "lvol_store_uuid": "a6c9732a-38e9-401f-9a5a-013e5a5549db", 00:22:40.902 "base_bdev": "nvme0n1", 00:22:40.902 "thin_provision": true, 00:22:40.902 "num_allocated_clusters": 0, 00:22:40.902 "snapshot": false, 00:22:40.902 "clone": false, 00:22:40.902 "esnap_clone": false 00:22:40.902 } 00:22:40.902 } 00:22:40.902 } 00:22:40.902 ]' 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:40.902 00:06:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:41.163 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:41.424 { 00:22:41.424 "name": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:41.424 "aliases": [ 00:22:41.424 "lvs/nvme0n1p0" 00:22:41.424 ], 00:22:41.424 "product_name": "Logical Volume", 00:22:41.424 "block_size": 4096, 00:22:41.424 "num_blocks": 26476544, 00:22:41.424 "uuid": "995339bc-9e66-45cc-8b60-fcf6ac19e3c2", 00:22:41.424 "assigned_rate_limits": { 00:22:41.424 "rw_ios_per_sec": 0, 00:22:41.424 "rw_mbytes_per_sec": 0, 00:22:41.424 "r_mbytes_per_sec": 0, 00:22:41.424 "w_mbytes_per_sec": 0 00:22:41.424 }, 00:22:41.424 "claimed": false, 00:22:41.424 "zoned": false, 00:22:41.424 "supported_io_types": { 00:22:41.424 "read": true, 00:22:41.424 "write": true, 00:22:41.424 "unmap": true, 00:22:41.424 "flush": false, 00:22:41.424 "reset": true, 00:22:41.424 "nvme_admin": false, 00:22:41.424 "nvme_io": false, 00:22:41.424 "nvme_io_md": false, 00:22:41.424 "write_zeroes": true, 00:22:41.424 "zcopy": false, 00:22:41.424 "get_zone_info": false, 00:22:41.424 "zone_management": false, 00:22:41.424 "zone_append": false, 00:22:41.424 "compare": false, 00:22:41.424 "compare_and_write": false, 00:22:41.424 "abort": false, 00:22:41.424 "seek_hole": true, 00:22:41.424 "seek_data": true, 00:22:41.424 "copy": false, 00:22:41.424 "nvme_iov_md": false 00:22:41.424 }, 00:22:41.424 "driver_specific": { 00:22:41.424 "lvol": { 00:22:41.424 "lvol_store_uuid": "a6c9732a-38e9-401f-9a5a-013e5a5549db", 00:22:41.424 "base_bdev": "nvme0n1", 00:22:41.424 "thin_provision": true, 00:22:41.424 "num_allocated_clusters": 0, 00:22:41.424 "snapshot": false, 00:22:41.424 "clone": false, 00:22:41.424 "esnap_clone": false 00:22:41.424 } 00:22:41.424 } 00:22:41.424 } 00:22:41.424 ]' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 
--l2p_dram_limit 10' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:41.424 00:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 995339bc-9e66-45cc-8b60-fcf6ac19e3c2 --l2p_dram_limit 10 -c nvc0n1p0 00:22:41.687 [2024-11-19 00:06:48.149637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.149674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:41.687 [2024-11-19 00:06:48.149687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:41.687 [2024-11-19 00:06:48.149694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.149737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.149745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.687 [2024-11-19 00:06:48.149753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:41.687 [2024-11-19 00:06:48.149759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.149777] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:41.687 [2024-11-19 00:06:48.150381] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:41.687 [2024-11-19 00:06:48.150399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.150405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.687 [2024-11-19 00:06:48.150413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:22:41.687 [2024-11-19 00:06:48.150420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.150461] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b3ee79cd-d3e4-452b-b6fc-dedb8c05032a 00:22:41.687 [2024-11-19 00:06:48.151452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.151560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:41.687 [2024-11-19 00:06:48.151573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:41.687 [2024-11-19 00:06:48.151581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.156408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.156439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.687 [2024-11-19 00:06:48.156449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:22:41.687 [2024-11-19 00:06:48.156456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.156525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.156534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.687 [2024-11-19 00:06:48.156540] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:41.687 [2024-11-19 00:06:48.156549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.156597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.156607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:41.687 [2024-11-19 00:06:48.156613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:41.687 [2024-11-19 00:06:48.156622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.156639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:41.687 [2024-11-19 00:06:48.159535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.159568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.687 [2024-11-19 00:06:48.159578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:22:41.687 [2024-11-19 00:06:48.159584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.159610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.687 [2024-11-19 00:06:48.159617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:41.687 [2024-11-19 00:06:48.159624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:41.687 [2024-11-19 00:06:48.159630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.687 [2024-11-19 00:06:48.159644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:41.687 [2024-11-19 00:06:48.159747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:41.687 [2024-11-19 00:06:48.159759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:41.687 [2024-11-19 00:06:48.159768] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:41.687 [2024-11-19 00:06:48.159777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:41.687 [2024-11-19 00:06:48.159784] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:41.687 [2024-11-19 00:06:48.159792] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:41.687 [2024-11-19 00:06:48.159797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:41.687 [2024-11-19 00:06:48.159806] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:41.688 [2024-11-19 00:06:48.159811] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:41.688 [2024-11-19 00:06:48.159818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.688 [2024-11-19 00:06:48.159824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:41.688 [2024-11-19 00:06:48.159831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:22:41.688 [2024-11-19 00:06:48.159842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.688 [2024-11-19 00:06:48.159907] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.688 [2024-11-19 00:06:48.159914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:41.688 [2024-11-19 00:06:48.159922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:41.688 [2024-11-19 00:06:48.159927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.688 [2024-11-19 00:06:48.160006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:41.688 [2024-11-19 00:06:48.160014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:41.688 [2024-11-19 00:06:48.160021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:41.688 [2024-11-19 00:06:48.160039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:41.688 [2024-11-19 00:06:48.160058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.688 [2024-11-19 00:06:48.160071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:41.688 [2024-11-19 00:06:48.160077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:41.688 [2024-11-19 00:06:48.160084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.688 [2024-11-19 00:06:48.160089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:41.688 [2024-11-19 00:06:48.160096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:41.688 [2024-11-19 00:06:48.160102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:41.688 [2024-11-19 00:06:48.160115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:41.688 [2024-11-19 00:06:48.160151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:41.688 [2024-11-19 00:06:48.160167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:41.688 [2024-11-19 00:06:48.160186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160198] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:41.688 [2024-11-19 00:06:48.160203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:41.688 [2024-11-19 00:06:48.160223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.688 [2024-11-19 00:06:48.160234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:41.688 [2024-11-19 00:06:48.160239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:41.688 [2024-11-19 00:06:48.160245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.688 [2024-11-19 00:06:48.160250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:41.688 [2024-11-19 00:06:48.160256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:41.688 [2024-11-19 00:06:48.160262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:41.688 [2024-11-19 00:06:48.160273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:41.688 [2024-11-19 00:06:48.160280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160285] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:41.688 [2024-11-19 00:06:48.160292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:41.688 [2024-11-19 00:06:48.160298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.688 [2024-11-19 00:06:48.160311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:41.688 [2024-11-19 00:06:48.160319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:41.688 [2024-11-19 00:06:48.160324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:41.688 [2024-11-19 00:06:48.160330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:41.688 [2024-11-19 00:06:48.160335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:41.688 [2024-11-19 00:06:48.160342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:41.688 [2024-11-19 00:06:48.160349] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:41.688 [2024-11-19 00:06:48.160358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:41.688 [2024-11-19 00:06:48.160374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:41.688 [2024-11-19 00:06:48.160379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:41.688 [2024-11-19 00:06:48.160386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:41.688 [2024-11-19 00:06:48.160392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:41.688 [2024-11-19 00:06:48.160398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:41.688 [2024-11-19 00:06:48.160404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:41.688 [2024-11-19 00:06:48.160411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:41.688 [2024-11-19 00:06:48.160416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:41.688 [2024-11-19 00:06:48.160424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:41.688 [2024-11-19 00:06:48.160455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:41.688 [2024-11-19 00:06:48.160462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:41.688 [2024-11-19 00:06:48.160475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:41.688 [2024-11-19 00:06:48.160480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:41.688 [2024-11-19 00:06:48.160487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:41.688 [2024-11-19 00:06:48.160494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.688 [2024-11-19 00:06:48.160501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:41.688 [2024-11-19 00:06:48.160506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:22:41.688 [2024-11-19 00:06:48.160512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.688 [2024-11-19 00:06:48.160551] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:41.688 [2024-11-19 00:06:48.160563] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:45.901 [2024-11-19 00:06:51.849617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.849702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:45.901 [2024-11-19 00:06:51.849722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3689.047 ms 00:22:45.901 [2024-11-19 00:06:51.849733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.882297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.882362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.901 [2024-11-19 00:06:51.882377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.312 ms 00:22:45.901 [2024-11-19 00:06:51.882389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.882530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.882546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.901 [2024-11-19 00:06:51.882556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:45.901 [2024-11-19 00:06:51.882571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.918481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.918533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.901 [2024-11-19 00:06:51.918546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.858 ms 00:22:45.901 [2024-11-19 00:06:51.918559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.918595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.918610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.901 [2024-11-19 00:06:51.918619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.901 [2024-11-19 00:06:51.918630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.919246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.919276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.901 [2024-11-19 00:06:51.919290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:22:45.901 [2024-11-19 00:06:51.919301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.919421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.919436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.901 [2024-11-19 00:06:51.919449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:45.901 [2024-11-19 00:06:51.919463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.937277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.937604] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.901 [2024-11-19 00:06:51.937626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.794 ms 00:22:45.901 [2024-11-19 00:06:51.937637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:51.951150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.901 [2024-11-19 00:06:51.955069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:51.955116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.901 [2024-11-19 00:06:51.955155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.332 ms 00:22:45.901 [2024-11-19 00:06:51.955164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.079542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.079621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:45.901 [2024-11-19 00:06:52.079643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.337 ms 00:22:45.901 [2024-11-19 00:06:52.079653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.079870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.079893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.901 [2024-11-19 00:06:52.079909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:22:45.901 [2024-11-19 00:06:52.079918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.106131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.106184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:45.901 [2024-11-19 00:06:52.106201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.141 ms 00:22:45.901 [2024-11-19 00:06:52.106211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.131816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.132115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:45.901 [2024-11-19 00:06:52.132160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:22:45.901 [2024-11-19 00:06:52.132169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.132881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.132936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.901 [2024-11-19 00:06:52.132950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:22:45.901 [2024-11-19 00:06:52.132960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.221617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.221669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:45.901 [2024-11-19 00:06:52.221688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.588 ms 00:22:45.901 [2024-11-19 00:06:52.221698] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.249942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.249994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:45.901 [2024-11-19 00:06:52.250010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.141 ms 00:22:45.901 [2024-11-19 00:06:52.250020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.901 [2024-11-19 00:06:52.276269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.901 [2024-11-19 00:06:52.276318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:45.902 [2024-11-19 00:06:52.276334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.193 ms 00:22:45.902 [2024-11-19 00:06:52.276342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.902 [2024-11-19 00:06:52.302930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.902 [2024-11-19 00:06:52.302980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.902 [2024-11-19 00:06:52.302996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.530 ms 00:22:45.902 [2024-11-19 00:06:52.303005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.902 [2024-11-19 00:06:52.303064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.902 [2024-11-19 00:06:52.303075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.902 [2024-11-19 00:06:52.303091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:45.902 [2024-11-19 00:06:52.303099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.902 [2024-11-19 00:06:52.303219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.902 [2024-11-19 00:06:52.303233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.902 [2024-11-19 00:06:52.303247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:45.902 [2024-11-19 00:06:52.303256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.902 [2024-11-19 00:06:52.304626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4154.444 ms, result 0 00:22:45.902 { 00:22:45.902 "name": "ftl0", 00:22:45.902 "uuid": "b3ee79cd-d3e4-452b-b6fc-dedb8c05032a" 00:22:45.902 } 00:22:45.902 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:45.902 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:45.902 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:45.902 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:45.902 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:46.164 /dev/nbd0 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:46.164 1+0 records in 00:22:46.164 1+0 records out 00:22:46.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362182 s, 11.3 MB/s 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:46.164 00:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:46.424 [2024-11-19 00:06:52.896740] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:46.424 [2024-11-19 00:06:52.897105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77524 ] 00:22:46.424 [2024-11-19 00:06:53.065835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.686 [2024-11-19 00:06:53.183885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.073  [2024-11-19T00:06:55.708Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-19T00:06:56.652Z] Copying: 423/1024 [MB] (234 MBps) [2024-11-19T00:06:57.595Z] Copying: 617/1024 [MB] (194 MBps) [2024-11-19T00:06:58.537Z] Copying: 809/1024 [MB] (191 MBps) [2024-11-19T00:06:58.537Z] Copying: 1008/1024 [MB] (198 MBps) [2024-11-19T00:06:59.108Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:22:52.416 00:22:52.692 00:06:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:54.606 00:07:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:54.606 [2024-11-19 00:07:01.214271] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
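
The waitfornbd trace above reduces to a short readiness probe: poll /proc/partitions until the kernel registers the device, then prove it actually answers I/O with a single 4 KiB O_DIRECT read (the spdk_dd launched right after then fills the test file with 262144 x 4096 B = 1 GiB of random data). A minimal bash sketch of that check, assuming the same nbd0 device and 20-attempt limit as the helper in this trace; the sleep back-off and the /tmp scratch path are illustrative additions, not what the harness uses:

# Wait until an NBD device shows up in /proc/partitions, then verify a direct read.
wait_for_nbd() {
    local nbd_name=$1
    local i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # illustrative back-off; the harness helper simply re-polls
    done
    # One 4 KiB read with iflag=direct bypasses the page cache, so it only
    # succeeds if the block device is really serving I/O.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
}
wait_for_nbd nbd0
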
00:22:54.606 [2024-11-19 00:07:01.214387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77618 ] 00:22:54.867 [2024-11-19 00:07:01.375853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.867 [2024-11-19 00:07:01.493316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.246  [2024-11-19T00:07:03.873Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-19T00:07:04.807Z] Copying: 43/1024 [MB] (16 MBps) [2024-11-19T00:07:05.738Z] Copying: 57/1024 [MB] (14 MBps) [2024-11-19T00:07:07.112Z] Copying: 79/1024 [MB] (21 MBps) [2024-11-19T00:07:08.045Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-19T00:07:08.977Z] Copying: 113/1024 [MB] (17 MBps) [2024-11-19T00:07:09.910Z] Copying: 134/1024 [MB] (20 MBps) [2024-11-19T00:07:10.843Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-19T00:07:11.777Z] Copying: 174/1024 [MB] (20 MBps) [2024-11-19T00:07:13.163Z] Copying: 202/1024 [MB] (28 MBps) [2024-11-19T00:07:13.782Z] Copying: 220/1024 [MB] (17 MBps) [2024-11-19T00:07:15.156Z] Copying: 237/1024 [MB] (17 MBps) [2024-11-19T00:07:16.090Z] Copying: 267/1024 [MB] (29 MBps) [2024-11-19T00:07:17.024Z] Copying: 283/1024 [MB] (16 MBps) [2024-11-19T00:07:17.958Z] Copying: 303/1024 [MB] (19 MBps) [2024-11-19T00:07:18.893Z] Copying: 319/1024 [MB] (16 MBps) [2024-11-19T00:07:19.826Z] Copying: 339/1024 [MB] (19 MBps) [2024-11-19T00:07:20.762Z] Copying: 353/1024 [MB] (13 MBps) [2024-11-19T00:07:22.134Z] Copying: 373/1024 [MB] (20 MBps) [2024-11-19T00:07:23.069Z] Copying: 397/1024 [MB] (24 MBps) [2024-11-19T00:07:24.002Z] Copying: 418/1024 [MB] (20 MBps) [2024-11-19T00:07:24.946Z] Copying: 437/1024 [MB] (19 MBps) [2024-11-19T00:07:25.888Z] Copying: 456/1024 [MB] (18 MBps) [2024-11-19T00:07:26.822Z] Copying: 490/1024 [MB] (33 MBps) [2024-11-19T00:07:27.756Z] Copying: 518/1024 [MB] (28 MBps) [2024-11-19T00:07:29.131Z] Copying: 531/1024 [MB] (12 MBps) [2024-11-19T00:07:30.067Z] Copying: 553/1024 [MB] (22 MBps) [2024-11-19T00:07:31.002Z] Copying: 585/1024 [MB] (32 MBps) [2024-11-19T00:07:32.082Z] Copying: 611/1024 [MB] (25 MBps) [2024-11-19T00:07:33.017Z] Copying: 623/1024 [MB] (11 MBps) [2024-11-19T00:07:33.953Z] Copying: 638/1024 [MB] (14 MBps) [2024-11-19T00:07:34.887Z] Copying: 654/1024 [MB] (16 MBps) [2024-11-19T00:07:35.822Z] Copying: 671/1024 [MB] (17 MBps) [2024-11-19T00:07:36.756Z] Copying: 686/1024 [MB] (14 MBps) [2024-11-19T00:07:38.131Z] Copying: 700/1024 [MB] (14 MBps) [2024-11-19T00:07:39.068Z] Copying: 713/1024 [MB] (12 MBps) [2024-11-19T00:07:40.006Z] Copying: 729/1024 [MB] (16 MBps) [2024-11-19T00:07:40.940Z] Copying: 753/1024 [MB] (23 MBps) [2024-11-19T00:07:41.873Z] Copying: 772/1024 [MB] (19 MBps) [2024-11-19T00:07:42.808Z] Copying: 787/1024 [MB] (14 MBps) [2024-11-19T00:07:43.741Z] Copying: 804/1024 [MB] (17 MBps) [2024-11-19T00:07:45.115Z] Copying: 824/1024 [MB] (19 MBps) [2024-11-19T00:07:46.049Z] Copying: 844/1024 [MB] (20 MBps) [2024-11-19T00:07:46.982Z] Copying: 860/1024 [MB] (16 MBps) [2024-11-19T00:07:47.916Z] Copying: 881/1024 [MB] (20 MBps) [2024-11-19T00:07:48.850Z] Copying: 914/1024 [MB] (33 MBps) [2024-11-19T00:07:49.783Z] Copying: 948/1024 [MB] (33 MBps) [2024-11-19T00:07:51.157Z] Copying: 971/1024 [MB] (22 MBps) [2024-11-19T00:07:52.090Z] Copying: 989/1024 [MB] (18 MBps) [2024-11-19T00:07:52.657Z] Copying: 1011/1024 [MB] (21 MBps) 
[2024-11-19T00:07:53.227Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:23:46.535 00:23:46.535 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:46.535 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:46.798 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:46.798 [2024-11-19 00:07:53.430172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.430203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.798 [2024-11-19 00:07:53.430214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:46.798 [2024-11-19 00:07:53.430221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.798 [2024-11-19 00:07:53.430239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:46.798 [2024-11-19 00:07:53.432352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.432368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.798 [2024-11-19 00:07:53.432379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:23:46.798 [2024-11-19 00:07:53.432385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.798 [2024-11-19 00:07:53.435313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.435339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.798 [2024-11-19 00:07:53.435348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:23:46.798 [2024-11-19 00:07:53.435355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.798 [2024-11-19 00:07:53.451166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.451196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.798 [2024-11-19 00:07:53.451207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.795 ms 00:23:46.798 [2024-11-19 00:07:53.451213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.798 [2024-11-19 00:07:53.455973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.455995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.798 [2024-11-19 00:07:53.456005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:23:46.798 [2024-11-19 00:07:53.456012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.798 [2024-11-19 00:07:53.475020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.798 [2024-11-19 00:07:53.475164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.798 [2024-11-19 00:07:53.475179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.958 ms 00:23:46.798 [2024-11-19 00:07:53.475185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.063 [2024-11-19 00:07:53.488294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.063 [2024-11-19 00:07:53.488320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 
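
The teardown being traced here (sync, stop the NBD export, unload the FTL bdev) is what makes this a clean shutdown: the trace_step records show L2P, NV-cache, valid-map, P2L, band and trim metadata all being persisted before the device is finally marked clean. A condensed sketch of the same sequence, assuming an SPDK checkout as the working directory and the target still running; paths are shortened from the absolute /home/vagrant ones in this log:

# Flush outstanding writes on the NBD device before detaching it.
sync /dev/nbd0
# Detach the NBD export, then unload the FTL bdev; the unload path persists
# all runtime metadata and marks the device clean, as the steps traced
# above and below show.
scripts/rpc.py nbd_stop_disk /dev/nbd0
scripts/rpc.py bdev_ftl_unload -b ftl0
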
00:23:47.063 [2024-11-19 00:07:53.488332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.078 ms 00:23:47.063 [2024-11-19 00:07:53.488341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.063 [2024-11-19 00:07:53.488454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.063 [2024-11-19 00:07:53.488463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:47.063 [2024-11-19 00:07:53.488471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:47.064 [2024-11-19 00:07:53.488477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.064 [2024-11-19 00:07:53.506589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.064 [2024-11-19 00:07:53.506701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:47.064 [2024-11-19 00:07:53.506716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.097 ms 00:23:47.064 [2024-11-19 00:07:53.506722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.064 [2024-11-19 00:07:53.525189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.064 [2024-11-19 00:07:53.525288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:47.064 [2024-11-19 00:07:53.525302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:23:47.064 [2024-11-19 00:07:53.525308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.064 [2024-11-19 00:07:53.543030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.064 [2024-11-19 00:07:53.543053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:47.064 [2024-11-19 00:07:53.543062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.693 ms 00:23:47.064 [2024-11-19 00:07:53.543068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.064 [2024-11-19 00:07:53.559927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.064 [2024-11-19 00:07:53.559950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:47.064 [2024-11-19 00:07:53.559959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.803 ms 00:23:47.064 [2024-11-19 00:07:53.559965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.064 [2024-11-19 00:07:53.559993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:47.064 [2024-11-19 00:07:53.560004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560256] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 
00:07:53.560429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:47.064 [2024-11-19 00:07:53.560531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:23:47.065 [2024-11-19 00:07:53.560593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:47.065 [2024-11-19 00:07:53.560723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:47.065 [2024-11-19 00:07:53.560730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b3ee79cd-d3e4-452b-b6fc-dedb8c05032a 00:23:47.065 [2024-11-19 00:07:53.560736] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:47.065 [2024-11-19 00:07:53.560743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:47.065 [2024-11-19 00:07:53.560749] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:47.065 [2024-11-19 00:07:53.560757] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:47.065 [2024-11-19 00:07:53.560763] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
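
The bands-validity dump above is uniform, one "Band N: valid / total wr_cnt: W state: S" record per band (all 100 bands report 0 / 261120 valid and state free in this run), so it condenses well mechanically. A sketch for summarizing such a dump from a saved console log, assuming gawk (for match() with a capture array) and an illustrative ftl.log file name:

# Summarize an FTL bands dump: bands per state plus total valid blocks.
gawk 'match($0, /Band [0-9]+: ([0-9]+) \/ [0-9]+ wr_cnt: [0-9]+ state: ([a-z]+)/, m) {
          valid += m[1]
          count[m[2]]++
      }
      END {
          for (s in count)
              printf "%s: %d bands\n", s, count[s]
          printf "valid blocks total: %d\n", valid
      }' ftl.log
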
00:23:47.065 [2024-11-19 00:07:53.560770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:47.065 [2024-11-19 00:07:53.560776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:47.065 [2024-11-19 00:07:53.560782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:47.065 [2024-11-19 00:07:53.560787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:47.065 [2024-11-19 00:07:53.560793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.065 [2024-11-19 00:07:53.560799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:47.065 [2024-11-19 00:07:53.560806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:23:47.065 [2024-11-19 00:07:53.560812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.570405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.065 [2024-11-19 00:07:53.570428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:47.065 [2024-11-19 00:07:53.570452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.569 ms 00:23:47.065 [2024-11-19 00:07:53.570458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.570725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.065 [2024-11-19 00:07:53.570732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:47.065 [2024-11-19 00:07:53.570740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:23:47.065 [2024-11-19 00:07:53.570746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.603418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.603445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.065 [2024-11-19 00:07:53.603455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.603461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.603504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.603510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.065 [2024-11-19 00:07:53.603517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.603523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.603571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.603578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.065 [2024-11-19 00:07:53.603587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.603594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.603610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.603616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.065 [2024-11-19 00:07:53.603624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.603629] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.662538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.662694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.065 [2024-11-19 00:07:53.662709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.662715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.710634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.710663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.065 [2024-11-19 00:07:53.710673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.710679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.710765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.710772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.065 [2024-11-19 00:07:53.710780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.710787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.710826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.710833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.065 [2024-11-19 00:07:53.710841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.710847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.710917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.710926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.065 [2024-11-19 00:07:53.710933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.710939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.710971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.710978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:47.065 [2024-11-19 00:07:53.710985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.710991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.711021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.711028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.065 [2024-11-19 00:07:53.711035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.065 [2024-11-19 00:07:53.711041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.711079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.065 [2024-11-19 00:07:53.711087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.065 [2024-11-19 00:07:53.711094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:47.065 [2024-11-19 00:07:53.711100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.065 [2024-11-19 00:07:53.711221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 281.020 ms, result 0 00:23:47.065 true 00:23:47.065 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77382 00:23:47.065 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77382 00:23:47.065 00:07:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:47.326 [2024-11-19 00:07:53.801428] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:47.326 [2024-11-19 00:07:53.801546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78163 ] 00:23:47.326 [2024-11-19 00:07:53.957512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.587 [2024-11-19 00:07:54.040167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.531  [2024-11-19T00:07:56.611Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-19T00:07:57.555Z] Copying: 519/1024 [MB] (261 MBps) [2024-11-19T00:07:58.500Z] Copying: 778/1024 [MB] (258 MBps) [2024-11-19T00:07:58.761Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:23:52.069 00:23:52.069 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77382 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:52.069 00:07:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:52.331 [2024-11-19 00:07:58.801569] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
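
This is the dirty-shutdown step itself: after the clean unload, dirty_shutdown.sh SIGKILLs the spdk_tgt that still owns the base and cache devices, then drives the next write from a separate spdk_dd process that must bring ftl0 back up from the configuration saved earlier with save_subsystem_config; the startup that follows performs recovery (blobstore recovery and the metadata restore steps are visible below). A sketch of those script lines, assuming the repo root as working directory, with the pid (77382) taken from this run and paths shortened from the absolute ones in the log:

# dirty_shutdown.sh@83-84: kill the target without any cleanup and drop its
# trace pid file, simulating an unclean stop of the process owning the devices.
kill -9 77382
rm -f /dev/shm/spdk_tgt_trace.pid77382
# @88: re-attach from a fresh process: spdk_dd re-creates the bdev stack from
# the saved JSON and writes a second 1 GiB region (--seek=262144 places it
# after the first), which forces FTL to load and recover before any I/O.
build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 \
    --count=262144 --seek=262144 --json=test/ftl/config/ftl.json
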
00:23:52.331 [2024-11-19 00:07:58.801679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78221 ] 00:23:52.331 [2024-11-19 00:07:58.958154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.593 [2024-11-19 00:07:59.036216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.593 [2024-11-19 00:07:59.241612] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.593 [2024-11-19 00:07:59.241662] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.854 [2024-11-19 00:07:59.305414] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:52.854 [2024-11-19 00:07:59.306158] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:52.854 [2024-11-19 00:07:59.306668] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:53.117 [2024-11-19 00:07:59.738191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.738250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:53.117 [2024-11-19 00:07:59.738266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:53.117 [2024-11-19 00:07:59.738275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.738347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.738358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.117 [2024-11-19 00:07:59.738367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:53.117 [2024-11-19 00:07:59.738375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.738395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:53.117 [2024-11-19 00:07:59.739098] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:53.117 [2024-11-19 00:07:59.739147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.739158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.117 [2024-11-19 00:07:59.739168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:23:53.117 [2024-11-19 00:07:59.739177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.740869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:53.117 [2024-11-19 00:07:59.755684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.755742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:53.117 [2024-11-19 00:07:59.755758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.816 ms 00:23:53.117 [2024-11-19 00:07:59.755766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.755845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.755856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:53.117 [2024-11-19 00:07:59.755865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:53.117 [2024-11-19 00:07:59.755873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.764036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.764259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.117 [2024-11-19 00:07:59.764279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.079 ms 00:23:53.117 [2024-11-19 00:07:59.764288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.764377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.764387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.117 [2024-11-19 00:07:59.764395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:53.117 [2024-11-19 00:07:59.764403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.764450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.764466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:53.117 [2024-11-19 00:07:59.764474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:53.117 [2024-11-19 00:07:59.764482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.764505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:53.117 [2024-11-19 00:07:59.768390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.768429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.117 [2024-11-19 00:07:59.768441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:23:53.117 [2024-11-19 00:07:59.768449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.768490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.768498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:53.117 [2024-11-19 00:07:59.768507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:53.117 [2024-11-19 00:07:59.768515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.768571] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:53.117 [2024-11-19 00:07:59.768597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:53.117 [2024-11-19 00:07:59.768634] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:53.117 [2024-11-19 00:07:59.768653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:53.117 [2024-11-19 00:07:59.768760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:53.117 [2024-11-19 00:07:59.768773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:53.117 
[2024-11-19 00:07:59.768785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:53.117 [2024-11-19 00:07:59.768796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:53.117 [2024-11-19 00:07:59.768808] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:53.117 [2024-11-19 00:07:59.768818] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:53.117 [2024-11-19 00:07:59.768825] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:53.117 [2024-11-19 00:07:59.768835] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:53.117 [2024-11-19 00:07:59.768844] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:53.117 [2024-11-19 00:07:59.768853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.768861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:53.117 [2024-11-19 00:07:59.768870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:53.117 [2024-11-19 00:07:59.768877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.768960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.117 [2024-11-19 00:07:59.768975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:53.117 [2024-11-19 00:07:59.768983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:53.117 [2024-11-19 00:07:59.768991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.117 [2024-11-19 00:07:59.769097] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:53.117 [2024-11-19 00:07:59.769110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:53.117 [2024-11-19 00:07:59.769142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.117 [2024-11-19 00:07:59.769151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.117 [2024-11-19 00:07:59.769160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:53.117 [2024-11-19 00:07:59.769167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:53.118 [2024-11-19 00:07:59.769194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.118 [2024-11-19 00:07:59.769210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:53.118 [2024-11-19 00:07:59.769224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:53.118 [2024-11-19 00:07:59.769232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.118 [2024-11-19 00:07:59.769244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:53.118 [2024-11-19 00:07:59.769252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:53.118 [2024-11-19 00:07:59.769262] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:53.118 [2024-11-19 00:07:59.769277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:53.118 [2024-11-19 00:07:59.769298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:53.118 [2024-11-19 00:07:59.769320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:53.118 [2024-11-19 00:07:59.769341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:53.118 [2024-11-19 00:07:59.769362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:53.118 [2024-11-19 00:07:59.769384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.118 [2024-11-19 00:07:59.769397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:53.118 [2024-11-19 00:07:59.769403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:53.118 [2024-11-19 00:07:59.769409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.118 [2024-11-19 00:07:59.769418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:53.118 [2024-11-19 00:07:59.769425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:53.118 [2024-11-19 00:07:59.769431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:53.118 [2024-11-19 00:07:59.769444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:53.118 [2024-11-19 00:07:59.769450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.118 [2024-11-19 00:07:59.769456] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:53.118 [2024-11-19 00:07:59.769469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:53.118 [2024-11-19 00:07:59.769481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.118 [2024-11-19 
00:07:59.769499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:53.118 [2024-11-19 00:07:59.769506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:53.118 [2024-11-19 00:07:59.769513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:53.118 [2024-11-19 00:07:59.769520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:53.118 [2024-11-19 00:07:59.769526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:53.118 [2024-11-19 00:07:59.769534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:53.118 [2024-11-19 00:07:59.769543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:53.118 [2024-11-19 00:07:59.769552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:53.118 [2024-11-19 00:07:59.769568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:53.118 [2024-11-19 00:07:59.769575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:53.118 [2024-11-19 00:07:59.769584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:53.118 [2024-11-19 00:07:59.769592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:53.118 [2024-11-19 00:07:59.769599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:53.118 [2024-11-19 00:07:59.769605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:53.118 [2024-11-19 00:07:59.769612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:53.118 [2024-11-19 00:07:59.769619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:53.118 [2024-11-19 00:07:59.769626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:53.118 [2024-11-19 00:07:59.769662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:53.118 [2024-11-19 00:07:59.769671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:53.118 [2024-11-19 00:07:59.769688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:53.118 [2024-11-19 00:07:59.769694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:53.118 [2024-11-19 00:07:59.769701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:53.118 [2024-11-19 00:07:59.769708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.118 [2024-11-19 00:07:59.769715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:53.118 [2024-11-19 00:07:59.769724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:23:53.118 [2024-11-19 00:07:59.769732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.118 [2024-11-19 00:07:59.802110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.118 [2024-11-19 00:07:59.802171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.118 [2024-11-19 00:07:59.802183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.331 ms 00:23:53.118 [2024-11-19 00:07:59.802192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.118 [2024-11-19 00:07:59.802283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.118 [2024-11-19 00:07:59.802296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:53.118 [2024-11-19 00:07:59.802304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:53.118 [2024-11-19 00:07:59.802312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.859178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.859237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.380 [2024-11-19 00:07:59.859252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.804 ms 00:23:53.380 [2024-11-19 00:07:59.859265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.859318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.859329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.380 [2024-11-19 00:07:59.859339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:53.380 [2024-11-19 00:07:59.859348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.859930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.859970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.380 [2024-11-19 00:07:59.859983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:23:53.380 [2024-11-19 00:07:59.859991] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.860188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.860290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.380 [2024-11-19 00:07:59.860303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:23:53.380 [2024-11-19 00:07:59.860312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.876565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.876607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.380 [2024-11-19 00:07:59.876618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.227 ms 00:23:53.380 [2024-11-19 00:07:59.876627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.891716] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:53.380 [2024-11-19 00:07:59.891900] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:53.380 [2024-11-19 00:07:59.891923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.891933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:53.380 [2024-11-19 00:07:59.891943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.180 ms 00:23:53.380 [2024-11-19 00:07:59.891951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.918552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.918732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:53.380 [2024-11-19 00:07:59.918767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.553 ms 00:23:53.380 [2024-11-19 00:07:59.918777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.932492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.932549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:53.380 [2024-11-19 00:07:59.932564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.380 ms 00:23:53.380 [2024-11-19 00:07:59.932573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.946254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.946319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:53.380 [2024-11-19 00:07:59.946333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.626 ms 00:23:53.380 [2024-11-19 00:07:59.946341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:07:59.947040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:07:59.947077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:53.380 [2024-11-19 00:07:59.947089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:23:53.380 [2024-11-19 00:07:59.947097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
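Each FTL management step above is emitted as four consecutive NOTICE records from mngt/ftl_mngt.c (lines 427-431): Action, name, duration, and status. A minimal sketch for totalling per-step durations from a saved copy of this console output follows; the file name build.log, the one-record-per-line layout, and the regexes are illustrative assumptions, not part of the SPDK tooling.

#!/usr/bin/env python3
# Sketch: total the per-step durations logged by mngt/ftl_mngt.c trace_step.
# Assumes "build.log" holds this console output with one record per line,
# as the console originally emitted it (an assumption for illustration).
import re
from collections import OrderedDict

NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)")
DUR  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

totals = OrderedDict()
pending = None
with open("build.log") as log:
    for line in log:
        m = NAME.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DUR.search(line)
        if m and pending is not None:
            totals[pending] = totals.get(pending, 0.0) + float(m.group(1))
            pending = None

for name, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ms:10.3f} ms  {name}")

Run against the startup sequence above, this would rank Restore P2L checkpoints (67.613 ms) and Initialize NV cache (56.804 ms) among the slowest steps.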
00:23:53.380 [2024-11-19 00:08:00.014731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:08:00.014803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:53.380 [2024-11-19 00:08:00.014821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.613 ms 00:23:53.380 [2024-11-19 00:08:00.014830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:08:00.026643] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:53.380 [2024-11-19 00:08:00.030462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:08:00.030604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:53.380 [2024-11-19 00:08:00.030665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.556 ms 00:23:53.380 [2024-11-19 00:08:00.030689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:08:00.030822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:08:00.030850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:53.380 [2024-11-19 00:08:00.030872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:53.380 [2024-11-19 00:08:00.030892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:08:00.030982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:08:00.031041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:53.380 [2024-11-19 00:08:00.031062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:53.380 [2024-11-19 00:08:00.031082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.380 [2024-11-19 00:08:00.031118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.380 [2024-11-19 00:08:00.031164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:53.381 [2024-11-19 00:08:00.031241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:53.381 [2024-11-19 00:08:00.031266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.381 [2024-11-19 00:08:00.031322] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:53.381 [2024-11-19 00:08:00.031386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.381 [2024-11-19 00:08:00.031446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:53.381 [2024-11-19 00:08:00.031470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:53.381 [2024-11-19 00:08:00.031489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.381 [2024-11-19 00:08:00.057619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.381 [2024-11-19 00:08:00.057807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:53.381 [2024-11-19 00:08:00.057873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.089 ms 00:23:53.381 [2024-11-19 00:08:00.057897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.381 [2024-11-19 00:08:00.058117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.381 [2024-11-19 
00:08:00.058201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:53.381 [2024-11-19 00:08:00.058225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:53.381 [2024-11-19 00:08:00.058246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.381 [2024-11-19 00:08:00.059567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.847 ms, result 0 00:23:54.768  [2024-11-19T00:08:02.402Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T00:08:03.346Z] Copying: 43/1024 [MB] (31 MBps) [2024-11-19T00:08:04.291Z] Copying: 68/1024 [MB] (24 MBps) [2024-11-19T00:08:05.237Z] Copying: 96/1024 [MB] (28 MBps) [2024-11-19T00:08:06.182Z] Copying: 116/1024 [MB] (19 MBps) [2024-11-19T00:08:07.129Z] Copying: 136/1024 [MB] (19 MBps) [2024-11-19T00:08:08.075Z] Copying: 163/1024 [MB] (26 MBps) [2024-11-19T00:08:09.466Z] Copying: 173/1024 [MB] (10 MBps) [2024-11-19T00:08:10.410Z] Copying: 183/1024 [MB] (10 MBps) [2024-11-19T00:08:11.357Z] Copying: 199/1024 [MB] (16 MBps) [2024-11-19T00:08:12.349Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-19T00:08:13.295Z] Copying: 220/1024 [MB] (10 MBps) [2024-11-19T00:08:14.239Z] Copying: 235/1024 [MB] (14 MBps) [2024-11-19T00:08:15.184Z] Copying: 248/1024 [MB] (12 MBps) [2024-11-19T00:08:16.129Z] Copying: 271/1024 [MB] (22 MBps) [2024-11-19T00:08:17.074Z] Copying: 285/1024 [MB] (14 MBps) [2024-11-19T00:08:18.461Z] Copying: 297/1024 [MB] (11 MBps) [2024-11-19T00:08:19.404Z] Copying: 316/1024 [MB] (19 MBps) [2024-11-19T00:08:20.346Z] Copying: 342/1024 [MB] (25 MBps) [2024-11-19T00:08:21.290Z] Copying: 374/1024 [MB] (32 MBps) [2024-11-19T00:08:22.232Z] Copying: 398/1024 [MB] (24 MBps) [2024-11-19T00:08:23.176Z] Copying: 416/1024 [MB] (17 MBps) [2024-11-19T00:08:24.120Z] Copying: 447/1024 [MB] (30 MBps) [2024-11-19T00:08:25.506Z] Copying: 466/1024 [MB] (18 MBps) [2024-11-19T00:08:26.078Z] Copying: 486/1024 [MB] (20 MBps) [2024-11-19T00:08:27.465Z] Copying: 500/1024 [MB] (14 MBps) [2024-11-19T00:08:28.409Z] Copying: 519/1024 [MB] (19 MBps) [2024-11-19T00:08:29.353Z] Copying: 544/1024 [MB] (24 MBps) [2024-11-19T00:08:30.296Z] Copying: 570/1024 [MB] (26 MBps) [2024-11-19T00:08:31.240Z] Copying: 593/1024 [MB] (22 MBps) [2024-11-19T00:08:32.185Z] Copying: 608/1024 [MB] (14 MBps) [2024-11-19T00:08:33.130Z] Copying: 619/1024 [MB] (11 MBps) [2024-11-19T00:08:34.075Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-19T00:08:35.463Z] Copying: 656232/1048576 [kB] (10072 kBps) [2024-11-19T00:08:36.405Z] Copying: 654/1024 [MB] (14 MBps) [2024-11-19T00:08:37.350Z] Copying: 671/1024 [MB] (16 MBps) [2024-11-19T00:08:38.294Z] Copying: 682/1024 [MB] (10 MBps) [2024-11-19T00:08:39.238Z] Copying: 692/1024 [MB] (10 MBps) [2024-11-19T00:08:40.238Z] Copying: 709/1024 [MB] (17 MBps) [2024-11-19T00:08:41.181Z] Copying: 720/1024 [MB] (11 MBps) [2024-11-19T00:08:42.123Z] Copying: 731/1024 [MB] (10 MBps) [2024-11-19T00:08:43.510Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-19T00:08:44.083Z] Copying: 753/1024 [MB] (11 MBps) [2024-11-19T00:08:45.469Z] Copying: 768/1024 [MB] (15 MBps) [2024-11-19T00:08:46.415Z] Copying: 782/1024 [MB] (13 MBps) [2024-11-19T00:08:47.359Z] Copying: 797/1024 [MB] (15 MBps) [2024-11-19T00:08:48.304Z] Copying: 811/1024 [MB] (13 MBps) [2024-11-19T00:08:49.247Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-19T00:08:50.190Z] Copying: 832/1024 [MB] (10 MBps) [2024-11-19T00:08:51.133Z] Copying: 842/1024 [MB] (10 MBps) [2024-11-19T00:08:52.076Z] 
Copying: 868/1024 [MB] (25 MBps) [2024-11-19T00:08:53.463Z] Copying: 887/1024 [MB] (19 MBps) [2024-11-19T00:08:54.407Z] Copying: 900/1024 [MB] (12 MBps) [2024-11-19T00:08:55.352Z] Copying: 912/1024 [MB] (12 MBps) [2024-11-19T00:08:56.296Z] Copying: 922/1024 [MB] (10 MBps) [2024-11-19T00:08:57.241Z] Copying: 933/1024 [MB] (10 MBps) [2024-11-19T00:08:58.182Z] Copying: 945/1024 [MB] (12 MBps) [2024-11-19T00:08:59.126Z] Copying: 955/1024 [MB] (10 MBps) [2024-11-19T00:09:00.515Z] Copying: 966/1024 [MB] (10 MBps) [2024-11-19T00:09:01.089Z] Copying: 980/1024 [MB] (14 MBps) [2024-11-19T00:09:02.476Z] Copying: 993/1024 [MB] (12 MBps) [2024-11-19T00:09:03.422Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-19T00:09:03.685Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-19T00:09:03.685Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 00:09:03.463106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.463336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.993 [2024-11-19 00:09:03.463362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:56.993 [2024-11-19 00:09:03.463372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.466076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.993 [2024-11-19 00:09:03.472245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.472295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.993 [2024-11-19 00:09:03.472308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:24:56.993 [2024-11-19 00:09:03.472317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.485544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.485724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:56.993 [2024-11-19 00:09:03.485746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.451 ms 00:24:56.993 [2024-11-19 00:09:03.485754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.508409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.508575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:56.993 [2024-11-19 00:09:03.508595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:24:56.993 [2024-11-19 00:09:03.508605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.514746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.514812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:56.993 [2024-11-19 00:09:03.514825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:24:56.993 [2024-11-19 00:09:03.514834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.542015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.542069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:56.993 [2024-11-19 00:09:03.542083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.135 ms 
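The spdk_dd progress markers above trace the 1024 MiB transfer that closes with a reported average of 16 MBps. A rough cross-check against two wall-clock stamps read off the log (first marker 00:08:02.402 at 12 MB, last 00:09:03.685 at 1024 MB) agrees:

# Rough throughput cross-check using two progress stamps read off this log.
from datetime import datetime

start = datetime.fromisoformat("2024-11-19T00:08:02.402")  # 12 MB copied
end   = datetime.fromisoformat("2024-11-19T00:09:03.685")  # 1024 MB copied
mib   = 1024 - 12
rate  = mib / (end - start).total_seconds()
print(f"{rate:.1f} MBps")  # ~16.5, consistent with the reported 16 MBps average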
00:24:56.993 [2024-11-19 00:09:03.542091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.993 [2024-11-19 00:09:03.558168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.993 [2024-11-19 00:09:03.558218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:56.993 [2024-11-19 00:09:03.558231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.004 ms 00:24:56.993 [2024-11-19 00:09:03.558239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.254 [2024-11-19 00:09:03.845034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.254 [2024-11-19 00:09:03.845099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:57.254 [2024-11-19 00:09:03.845116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 286.740 ms 00:24:57.254 [2024-11-19 00:09:03.845164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.254 [2024-11-19 00:09:03.872251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.254 [2024-11-19 00:09:03.872451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:57.254 [2024-11-19 00:09:03.872472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.066 ms 00:24:57.254 [2024-11-19 00:09:03.872481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.254 [2024-11-19 00:09:03.898492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.254 [2024-11-19 00:09:03.898542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:57.254 [2024-11-19 00:09:03.898555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.950 ms 00:24:57.254 [2024-11-19 00:09:03.898563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.254 [2024-11-19 00:09:03.923686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.254 [2024-11-19 00:09:03.923872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:57.254 [2024-11-19 00:09:03.923893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.076 ms 00:24:57.254 [2024-11-19 00:09:03.923902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.516 [2024-11-19 00:09:03.949253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.516 [2024-11-19 00:09:03.949301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:57.516 [2024-11-19 00:09:03.949313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.236 ms 00:24:57.516 [2024-11-19 00:09:03.949320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.516 [2024-11-19 00:09:03.949365] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:57.516 [2024-11-19 00:09:03.949380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103424 / 261120 wr_cnt: 1 state: open 00:24:57.516 [2024-11-19 00:09:03.949392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 
00:09:03.949418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:57.516 [2024-11-19 00:09:03.949473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:24:57.517 [2024-11-19 00:09:03.949612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.949992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:57.517 [2024-11-19 00:09:03.950195] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:57.518 [2024-11-19 00:09:03.950204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b3ee79cd-d3e4-452b-b6fc-dedb8c05032a 00:24:57.518 [2024-11-19 00:09:03.950212] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103424 00:24:57.518 [2024-11-19 00:09:03.950226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104384 
00:24:57.518 [2024-11-19 00:09:03.950242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103424 00:24:57.518 [2024-11-19 00:09:03.950251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:57.518 [2024-11-19 00:09:03.950259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:57.518 [2024-11-19 00:09:03.950267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:57.518 [2024-11-19 00:09:03.950275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:57.518 [2024-11-19 00:09:03.950282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:57.518 [2024-11-19 00:09:03.950289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:57.518 [2024-11-19 00:09:03.950297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.518 [2024-11-19 00:09:03.950305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:57.518 [2024-11-19 00:09:03.950343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:24:57.518 [2024-11-19 00:09:03.950351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:03.964031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.518 [2024-11-19 00:09:03.964077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:57.518 [2024-11-19 00:09:03.964089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.647 ms 00:24:57.518 [2024-11-19 00:09:03.964098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:03.964574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.518 [2024-11-19 00:09:03.964602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:57.518 [2024-11-19 00:09:03.964613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:24:57.518 [2024-11-19 00:09:03.964621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.001361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.001555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.518 [2024-11-19 00:09:04.001576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.001587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.001660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.001671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.518 [2024-11-19 00:09:04.001680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.001690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.001786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.001798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.518 [2024-11-19 00:09:04.001807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.001815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.001831] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.001840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.518 [2024-11-19 00:09:04.001848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.001856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.086612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.086671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.518 [2024-11-19 00:09:04.086684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.086693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.155828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.155889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.518 [2024-11-19 00:09:04.155900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.155909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.155978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.155988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.518 [2024-11-19 00:09:04.155997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.156069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.156080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.518 [2024-11-19 00:09:04.156089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.156265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.156288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.518 [2024-11-19 00:09:04.156301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.156364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.156379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.518 [2024-11-19 00:09:04.156391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.156462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.156484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.518 [2024-11-19 00:09:04.156493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:57.518 [2024-11-19 00:09:04.156551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.518 [2024-11-19 00:09:04.156564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.518 [2024-11-19 00:09:04.156572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.518 [2024-11-19 00:09:04.156580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.518 [2024-11-19 00:09:04.156736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 694.104 ms, result 0 00:24:58.904 00:24:58.904 00:24:58.904 00:09:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:01.454 00:09:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:01.454 [2024-11-19 00:09:07.652262] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:01.454 [2024-11-19 00:09:07.652544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78928 ] 00:25:01.454 [2024-11-19 00:09:07.813174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.454 [2024-11-19 00:09:07.921659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.717 [2024-11-19 00:09:08.211848] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:01.717 [2024-11-19 00:09:08.211931] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:01.717 [2024-11-19 00:09:08.373424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.373483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:01.717 [2024-11-19 00:09:08.373503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:01.717 [2024-11-19 00:09:08.373513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.373567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.373578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:01.717 [2024-11-19 00:09:08.373590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:01.717 [2024-11-19 00:09:08.373597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.373618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:01.717 [2024-11-19 00:09:08.374345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:01.717 [2024-11-19 00:09:08.374365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.374374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:01.717 [2024-11-19 00:09:08.374383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:25:01.717 [2024-11-19 00:09:08.374391] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.376718] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:01.717 [2024-11-19 00:09:08.392029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.392274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:01.717 [2024-11-19 00:09:08.392300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.315 ms 00:25:01.717 [2024-11-19 00:09:08.392310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.392387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.392398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:01.717 [2024-11-19 00:09:08.392407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:01.717 [2024-11-19 00:09:08.392414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.400833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.400880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:01.717 [2024-11-19 00:09:08.400891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.334 ms 00:25:01.717 [2024-11-19 00:09:08.400899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.400986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.400995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:01.717 [2024-11-19 00:09:08.401003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:01.717 [2024-11-19 00:09:08.401011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.401057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.717 [2024-11-19 00:09:08.401068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:01.717 [2024-11-19 00:09:08.401077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:01.717 [2024-11-19 00:09:08.401084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.717 [2024-11-19 00:09:08.401110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:01.980 [2024-11-19 00:09:08.405300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.980 [2024-11-19 00:09:08.405339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:01.980 [2024-11-19 00:09:08.405350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:25:01.980 [2024-11-19 00:09:08.405361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.980 [2024-11-19 00:09:08.405397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.980 [2024-11-19 00:09:08.405405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:01.980 [2024-11-19 00:09:08.405414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:01.980 [2024-11-19 00:09:08.405422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.980 [2024-11-19 00:09:08.405474] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:01.980 [2024-11-19 00:09:08.405499] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:01.980 [2024-11-19 00:09:08.405535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:01.980 [2024-11-19 00:09:08.405555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:01.980 [2024-11-19 00:09:08.405662] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:01.980 [2024-11-19 00:09:08.405673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:01.980 [2024-11-19 00:09:08.405686] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:01.980 [2024-11-19 00:09:08.405697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:01.981 [2024-11-19 00:09:08.405706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:01.981 [2024-11-19 00:09:08.405714] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:01.981 [2024-11-19 00:09:08.405722] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:01.981 [2024-11-19 00:09:08.405730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:01.981 [2024-11-19 00:09:08.405737] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:01.981 [2024-11-19 00:09:08.405749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.981 [2024-11-19 00:09:08.405757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:01.981 [2024-11-19 00:09:08.405765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:01.981 [2024-11-19 00:09:08.405772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.981 [2024-11-19 00:09:08.405856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.981 [2024-11-19 00:09:08.405864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:01.981 [2024-11-19 00:09:08.405873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:01.981 [2024-11-19 00:09:08.405880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.981 [2024-11-19 00:09:08.405984] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:01.981 [2024-11-19 00:09:08.405997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:01.981 [2024-11-19 00:09:08.406005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:01.981 [2024-11-19 00:09:08.406028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:25:01.981 [2024-11-19 00:09:08.406052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.981 [2024-11-19 00:09:08.406066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:01.981 [2024-11-19 00:09:08.406073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:01.981 [2024-11-19 00:09:08.406079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.981 [2024-11-19 00:09:08.406086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:01.981 [2024-11-19 00:09:08.406095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:01.981 [2024-11-19 00:09:08.406108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:01.981 [2024-11-19 00:09:08.406146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:01.981 [2024-11-19 00:09:08.406169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:01.981 [2024-11-19 00:09:08.406191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:01.981 [2024-11-19 00:09:08.406211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:01.981 [2024-11-19 00:09:08.406232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:01.981 [2024-11-19 00:09:08.406253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.981 [2024-11-19 00:09:08.406267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:01.981 [2024-11-19 00:09:08.406275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:01.981 [2024-11-19 00:09:08.406282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.981 [2024-11-19 00:09:08.406289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:01.981 [2024-11-19 00:09:08.406296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:01.981 [2024-11-19 00:09:08.406304] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:01.981 [2024-11-19 00:09:08.406318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:01.981 [2024-11-19 00:09:08.406325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:01.981 [2024-11-19 00:09:08.406340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:01.981 [2024-11-19 00:09:08.406348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.981 [2024-11-19 00:09:08.406366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:01.981 [2024-11-19 00:09:08.406373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:01.981 [2024-11-19 00:09:08.406380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:01.981 [2024-11-19 00:09:08.406387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:01.981 [2024-11-19 00:09:08.406393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:01.981 [2024-11-19 00:09:08.406401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:01.981 [2024-11-19 00:09:08.406409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:01.981 [2024-11-19 00:09:08.406419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:01.981 [2024-11-19 00:09:08.406434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:01.981 [2024-11-19 00:09:08.406441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:01.981 [2024-11-19 00:09:08.406448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:01.981 [2024-11-19 00:09:08.406456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:01.981 [2024-11-19 00:09:08.406463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:01.981 [2024-11-19 00:09:08.406480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:01.981 [2024-11-19 00:09:08.406487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:01.981 [2024-11-19 00:09:08.406494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:01.981 [2024-11-19 00:09:08.406502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:01.981 [2024-11-19 00:09:08.406537] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:01.981 [2024-11-19 00:09:08.406549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:01.981 [2024-11-19 00:09:08.406565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:01.981 [2024-11-19 00:09:08.406573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:01.981 [2024-11-19 00:09:08.406580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:01.981 [2024-11-19 00:09:08.406588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.981 [2024-11-19 00:09:08.406597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:01.981 [2024-11-19 00:09:08.406604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:25:01.981 [2024-11-19 00:09:08.406614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.981 [2024-11-19 00:09:08.439085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.981 [2024-11-19 00:09:08.439146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:01.981 [2024-11-19 00:09:08.439159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.424 ms 00:25:01.981 [2024-11-19 00:09:08.439168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.981 [2024-11-19 00:09:08.439289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.439298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:01.982 [2024-11-19 00:09:08.439306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:01.982 [2024-11-19 00:09:08.439314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.487540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.487596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:01.982 [2024-11-19 00:09:08.487610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.167 ms 00:25:01.982 [2024-11-19 00:09:08.487619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.487670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.487680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:01.982 [2024-11-19 00:09:08.487690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:01.982 [2024-11-19 00:09:08.487701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.488320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.488345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:01.982 [2024-11-19 00:09:08.488358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:25:01.982 [2024-11-19 00:09:08.488370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.488575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.488596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:01.982 [2024-11-19 00:09:08.488611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:25:01.982 [2024-11-19 00:09:08.488641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.504635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.504681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:01.982 [2024-11-19 00:09:08.504695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.954 ms 00:25:01.982 [2024-11-19 00:09:08.504703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.519281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:01.982 [2024-11-19 00:09:08.519330] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:01.982 [2024-11-19 00:09:08.519345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.519354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:01.982 [2024-11-19 00:09:08.519363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:25:01.982 [2024-11-19 00:09:08.519371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.545863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.546062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:01.982 [2024-11-19 00:09:08.546084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 00:25:01.982 [2024-11-19 00:09:08.546093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.558861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.558919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:01.982 [2024-11-19 00:09:08.558931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.717 ms 00:25:01.982 [2024-11-19 00:09:08.558939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.571613] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.571661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:01.982 [2024-11-19 00:09:08.571673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.626 ms 00:25:01.982 [2024-11-19 00:09:08.571680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.572374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.572401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:01.982 [2024-11-19 00:09:08.572412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:25:01.982 [2024-11-19 00:09:08.572423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.638418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.638480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:01.982 [2024-11-19 00:09:08.638502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.974 ms 00:25:01.982 [2024-11-19 00:09:08.638512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.649813] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:01.982 [2024-11-19 00:09:08.652927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.652973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:01.982 [2024-11-19 00:09:08.652985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.359 ms 00:25:01.982 [2024-11-19 00:09:08.652994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.653081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.653093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:01.982 [2024-11-19 00:09:08.653102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:01.982 [2024-11-19 00:09:08.653113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.654895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.654943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:01.982 [2024-11-19 00:09:08.654955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:25:01.982 [2024-11-19 00:09:08.654963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.654999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.655009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:01.982 [2024-11-19 00:09:08.655018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:01.982 [2024-11-19 00:09:08.655026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.982 [2024-11-19 00:09:08.655066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:01.982 [2024-11-19 00:09:08.655079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.982 [2024-11-19 00:09:08.655088] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:01.982 [2024-11-19 00:09:08.655097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:01.982 [2024-11-19 00:09:08.655105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.245 [2024-11-19 00:09:08.680558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.245 [2024-11-19 00:09:08.680609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:02.245 [2024-11-19 00:09:08.680623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.418 ms 00:25:02.245 [2024-11-19 00:09:08.680638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.245 [2024-11-19 00:09:08.680726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.245 [2024-11-19 00:09:08.680738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:02.245 [2024-11-19 00:09:08.680759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:02.245 [2024-11-19 00:09:08.680768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.245 [2024-11-19 00:09:08.682167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.208 ms, result 0 00:25:03.189  [2024-11-19T00:09:11.268Z] Copying: 1176/1048576 [kB] (1176 kBps) [2024-11-19T00:09:12.212Z] Copying: 4452/1048576 [kB] (3276 kBps) [2024-11-19T00:09:13.155Z] Copying: 15/1024 [MB] (10 MBps) [2024-11-19T00:09:14.153Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-19T00:09:15.097Z] Copying: 60/1024 [MB] (27 MBps) [2024-11-19T00:09:16.040Z] Copying: 98/1024 [MB] (37 MBps) [2024-11-19T00:09:16.984Z] Copying: 126/1024 [MB] (28 MBps) [2024-11-19T00:09:17.929Z] Copying: 152/1024 [MB] (26 MBps) [2024-11-19T00:09:19.318Z] Copying: 177/1024 [MB] (24 MBps) [2024-11-19T00:09:19.889Z] Copying: 206/1024 [MB] (29 MBps) [2024-11-19T00:09:21.278Z] Copying: 240/1024 [MB] (33 MBps) [2024-11-19T00:09:22.223Z] Copying: 269/1024 [MB] (29 MBps) [2024-11-19T00:09:23.168Z] Copying: 292/1024 [MB] (22 MBps) [2024-11-19T00:09:24.112Z] Copying: 319/1024 [MB] (27 MBps) [2024-11-19T00:09:25.057Z] Copying: 347/1024 [MB] (27 MBps) [2024-11-19T00:09:26.002Z] Copying: 377/1024 [MB] (30 MBps) [2024-11-19T00:09:26.948Z] Copying: 405/1024 [MB] (27 MBps) [2024-11-19T00:09:27.892Z] Copying: 433/1024 [MB] (28 MBps) [2024-11-19T00:09:29.281Z] Copying: 462/1024 [MB] (28 MBps) [2024-11-19T00:09:30.224Z] Copying: 488/1024 [MB] (26 MBps) [2024-11-19T00:09:31.167Z] Copying: 515/1024 [MB] (26 MBps) [2024-11-19T00:09:32.111Z] Copying: 548/1024 [MB] (33 MBps) [2024-11-19T00:09:33.054Z] Copying: 578/1024 [MB] (29 MBps) [2024-11-19T00:09:33.997Z] Copying: 601/1024 [MB] (23 MBps) [2024-11-19T00:09:34.939Z] Copying: 627/1024 [MB] (26 MBps) [2024-11-19T00:09:35.882Z] Copying: 651/1024 [MB] (24 MBps) [2024-11-19T00:09:37.269Z] Copying: 679/1024 [MB] (27 MBps) [2024-11-19T00:09:38.212Z] Copying: 711/1024 [MB] (31 MBps) [2024-11-19T00:09:39.156Z] Copying: 738/1024 [MB] (27 MBps) [2024-11-19T00:09:40.100Z] Copying: 771/1024 [MB] (32 MBps) [2024-11-19T00:09:41.042Z] Copying: 802/1024 [MB] (31 MBps) [2024-11-19T00:09:41.987Z] Copying: 829/1024 [MB] (26 MBps) [2024-11-19T00:09:42.931Z] Copying: 855/1024 [MB] (26 MBps) [2024-11-19T00:09:43.875Z] Copying: 878/1024 [MB] (23 MBps) [2024-11-19T00:09:44.891Z] Copying: 906/1024 [MB] (28 MBps) [2024-11-19T00:09:46.279Z] Copying: 926/1024 [MB] (20 MBps) 
[2024-11-19T00:09:47.224Z] Copying: 954/1024 [MB] (27 MBps) [2024-11-19T00:09:48.168Z] Copying: 969/1024 [MB] (15 MBps) [2024-11-19T00:09:49.112Z] Copying: 994/1024 [MB] (24 MBps) [2024-11-19T00:09:49.112Z] Copying: 1020/1024 [MB] (25 MBps) [2024-11-19T00:09:49.687Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 00:09:49.394624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.394999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.995 [2024-11-19 00:09:49.395043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.995 [2024-11-19 00:09:49.395053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.395090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.995 [2024-11-19 00:09:49.398380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.398436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.995 [2024-11-19 00:09:49.398449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:25:42.995 [2024-11-19 00:09:49.398458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.398712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.398726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.995 [2024-11-19 00:09:49.398740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:25:42.995 [2024-11-19 00:09:49.398748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.415768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.415826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.995 [2024-11-19 00:09:49.415840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.000 ms 00:25:42.995 [2024-11-19 00:09:49.415850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.422691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.422739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.995 [2024-11-19 00:09:49.422751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.798 ms 00:25:42.995 [2024-11-19 00:09:49.422768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.450462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.450515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.995 [2024-11-19 00:09:49.450529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.644 ms 00:25:42.995 [2024-11-19 00:09:49.450537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.466949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.467003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.995 [2024-11-19 00:09:49.467018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.360 ms 00:25:42.995 [2024-11-19 00:09:49.467026] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.471741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.471792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.995 [2024-11-19 00:09:49.471804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:25:42.995 [2024-11-19 00:09:49.471813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.498563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.498613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.995 [2024-11-19 00:09:49.498626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.724 ms 00:25:42.995 [2024-11-19 00:09:49.498634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.524971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.525018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.995 [2024-11-19 00:09:49.525070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.286 ms 00:25:42.995 [2024-11-19 00:09:49.525079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.550361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.550391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.995 [2024-11-19 00:09:49.550401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.217 ms 00:25:42.995 [2024-11-19 00:09:49.550407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.573521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.995 [2024-11-19 00:09:49.573551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.995 [2024-11-19 00:09:49.573562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.060 ms 00:25:42.995 [2024-11-19 00:09:49.573569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.995 [2024-11-19 00:09:49.573601] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.995 [2024-11-19 00:09:49.573615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:42.995 [2024-11-19 00:09:49.573625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:42.995 [2024-11-19 00:09:49.573633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.995 [2024-11-19 00:09:49.573944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.573995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574048] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574263] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.996 [2024-11-19 00:09:49.574408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.996 [2024-11-19 00:09:49.574416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b3ee79cd-d3e4-452b-b6fc-dedb8c05032a 00:25:42.996 [2024-11-19 00:09:49.574424] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:42.996 [2024-11-19 00:09:49.574431] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161216 00:25:42.996 [2024-11-19 00:09:49.574438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159232 00:25:42.996 [2024-11-19 00:09:49.574456] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:25:42.996 [2024-11-19 00:09:49.574463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.996 [2024-11-19 00:09:49.574471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.996 [2024-11-19 00:09:49.574479] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:25:42.996 [2024-11-19 00:09:49.574490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.996 [2024-11-19 00:09:49.574497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.996 [2024-11-19 00:09:49.574503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.996 [2024-11-19 00:09:49.574511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.996 [2024-11-19 00:09:49.574519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:25:42.996 [2024-11-19 00:09:49.574526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.586931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.996 [2024-11-19 00:09:49.586964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.996 [2024-11-19 00:09:49.586974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.388 ms 00:25:42.996 [2024-11-19 00:09:49.586981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.587357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.996 [2024-11-19 00:09:49.587368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.996 [2024-11-19 00:09:49.587376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:25:42.996 [2024-11-19 00:09:49.587384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.620864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.996 [2024-11-19 00:09:49.621009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.996 [2024-11-19 00:09:49.621026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.996 [2024-11-19 00:09:49.621035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.621105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.996 [2024-11-19 00:09:49.621114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.996 [2024-11-19 00:09:49.621138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.996 [2024-11-19 00:09:49.621146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.621227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.996 [2024-11-19 00:09:49.621240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.996 [2024-11-19 00:09:49.621250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.996 [2024-11-19 00:09:49.621258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.996 [2024-11-19 00:09:49.621273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.996 [2024-11-19 00:09:49.621281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.996 [2024-11-19 00:09:49.621288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.996 [2024-11-19 00:09:49.621296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.699381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 
00:09:49.699423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.257 [2024-11-19 00:09:49.699435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.699443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.257 [2024-11-19 00:09:49.764527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.257 [2024-11-19 00:09:49.764610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.257 [2024-11-19 00:09:49.764682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.257 [2024-11-19 00:09:49.764793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:43.257 [2024-11-19 00:09:49.764849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.257 [2024-11-19 00:09:49.764911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.764961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.257 [2024-11-19 00:09:49.764970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.257 [2024-11-19 00:09:49.764978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.257 [2024-11-19 00:09:49.764986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.257 [2024-11-19 00:09:49.765145] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.475 ms, result 0 00:25:43.840 00:25:43.840 00:25:43.841 00:09:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:45.763 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:45.763 00:09:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:45.763 [2024-11-19 00:09:52.446457] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:45.763 [2024-11-19 00:09:52.446549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79387 ] 00:25:46.024 [2024-11-19 00:09:52.603057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.024 [2024-11-19 00:09:52.709266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.598 [2024-11-19 00:09:52.998284] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.598 [2024-11-19 00:09:52.998368] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.598 [2024-11-19 00:09:53.160604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.160665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:46.598 [2024-11-19 00:09:53.160686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.598 [2024-11-19 00:09:53.160695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.160753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.160764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.598 [2024-11-19 00:09:53.160776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:46.598 [2024-11-19 00:09:53.160784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.160805] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:46.598 [2024-11-19 00:09:53.161583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:46.598 [2024-11-19 00:09:53.161619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.161628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.598 [2024-11-19 00:09:53.161639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:25:46.598 [2024-11-19 00:09:53.161647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.163417] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:46.598 [2024-11-19 00:09:53.177857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.177912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:46.598 [2024-11-19 
00:09:53.177927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.443 ms 00:25:46.598 [2024-11-19 00:09:53.177936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.178022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.178038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:46.598 [2024-11-19 00:09:53.178047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:46.598 [2024-11-19 00:09:53.178055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.186473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.186519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.598 [2024-11-19 00:09:53.186531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:25:46.598 [2024-11-19 00:09:53.186540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.186634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.186645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.598 [2024-11-19 00:09:53.186656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:46.598 [2024-11-19 00:09:53.186664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.186709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.186719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:46.598 [2024-11-19 00:09:53.186728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:46.598 [2024-11-19 00:09:53.186736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.186759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:46.598 [2024-11-19 00:09:53.190966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.191010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.598 [2024-11-19 00:09:53.191021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:25:46.598 [2024-11-19 00:09:53.191032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.598 [2024-11-19 00:09:53.191072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.598 [2024-11-19 00:09:53.191081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:46.598 [2024-11-19 00:09:53.191090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:46.599 [2024-11-19 00:09:53.191098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.599 [2024-11-19 00:09:53.191172] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:46.599 [2024-11-19 00:09:53.191197] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:46.599 [2024-11-19 00:09:53.191235] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:46.599 [2024-11-19 00:09:53.191255] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:46.599 [2024-11-19 00:09:53.191361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:46.599 [2024-11-19 00:09:53.191375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:46.599 [2024-11-19 00:09:53.191387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:46.599 [2024-11-19 00:09:53.191401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191413] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191422] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:46.599 [2024-11-19 00:09:53.191429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:46.599 [2024-11-19 00:09:53.191438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:46.599 [2024-11-19 00:09:53.191446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:46.599 [2024-11-19 00:09:53.191458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.599 [2024-11-19 00:09:53.191467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:46.599 [2024-11-19 00:09:53.191475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:46.599 [2024-11-19 00:09:53.191482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.599 [2024-11-19 00:09:53.191570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.599 [2024-11-19 00:09:53.191579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:46.599 [2024-11-19 00:09:53.191586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:46.599 [2024-11-19 00:09:53.191594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.599 [2024-11-19 00:09:53.191699] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:46.599 [2024-11-19 00:09:53.191712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:46.599 [2024-11-19 00:09:53.191724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:46.599 [2024-11-19 00:09:53.191749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:46.599 [2024-11-19 00:09:53.191772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.599 [2024-11-19 00:09:53.191785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:46.599 [2024-11-19 00:09:53.191792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:25:46.599 [2024-11-19 00:09:53.191799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.599 [2024-11-19 00:09:53.191807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:46.599 [2024-11-19 00:09:53.191814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:46.599 [2024-11-19 00:09:53.191827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:46.599 [2024-11-19 00:09:53.191841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:46.599 [2024-11-19 00:09:53.191862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:46.599 [2024-11-19 00:09:53.191883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:46.599 [2024-11-19 00:09:53.191910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:46.599 [2024-11-19 00:09:53.191935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.599 [2024-11-19 00:09:53.191952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:46.599 [2024-11-19 00:09:53.191960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:46.599 [2024-11-19 00:09:53.191968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.599 [2024-11-19 00:09:53.191975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:46.599 [2024-11-19 00:09:53.191984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:46.599 [2024-11-19 00:09:53.191991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.599 [2024-11-19 00:09:53.191998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:46.599 [2024-11-19 00:09:53.192005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:46.599 [2024-11-19 00:09:53.192012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.192020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:46.599 [2024-11-19 00:09:53.192027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:46.599 [2024-11-19 00:09:53.192040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.192048] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:46.599 [2024-11-19 00:09:53.192055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:46.599 [2024-11-19 00:09:53.192063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.599 [2024-11-19 00:09:53.192071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.599 [2024-11-19 00:09:53.192078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:46.599 [2024-11-19 00:09:53.192085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:46.599 [2024-11-19 00:09:53.192092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:46.599 [2024-11-19 00:09:53.192098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:46.599 [2024-11-19 00:09:53.192105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:46.599 [2024-11-19 00:09:53.192111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:46.599 [2024-11-19 00:09:53.192136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:46.599 [2024-11-19 00:09:53.192147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.599 [2024-11-19 00:09:53.192155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:46.599 [2024-11-19 00:09:53.192163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:46.599 [2024-11-19 00:09:53.192171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:46.600 [2024-11-19 00:09:53.192182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:46.600 [2024-11-19 00:09:53.192192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:46.600 [2024-11-19 00:09:53.192200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:46.600 [2024-11-19 00:09:53.192209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:46.600 [2024-11-19 00:09:53.192218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:46.600 [2024-11-19 00:09:53.192229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:46.600 [2024-11-19 00:09:53.192237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192265] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:46.600 [2024-11-19 00:09:53.192280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:46.600 [2024-11-19 00:09:53.192293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:46.600 [2024-11-19 00:09:53.192309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:46.600 [2024-11-19 00:09:53.192317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:46.600 [2024-11-19 00:09:53.192325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:46.600 [2024-11-19 00:09:53.192332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.192340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:46.600 [2024-11-19 00:09:53.192348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:25:46.600 [2024-11-19 00:09:53.192355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.225041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.225110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.600 [2024-11-19 00:09:53.225142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.638 ms 00:25:46.600 [2024-11-19 00:09:53.225152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.225252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.225261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.600 [2024-11-19 00:09:53.225271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:46.600 [2024-11-19 00:09:53.225307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.273106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.273175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.600 [2024-11-19 00:09:53.273189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.738 ms 00:25:46.600 [2024-11-19 00:09:53.273198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.273249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.273260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.600 [2024-11-19 00:09:53.273270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:46.600 [2024-11-19 00:09:53.273283] mngt/ftl_mngt.c: 
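The base-device table cross-checks the same way: the type:0x9 region at blk_offs:0x40 with blk_sz:0x1900000 is exactly the 102400.00 MiB data_btm region starting at 0.25 MiB in the MiB-denominated dump. Working through the arithmetic (4 KiB blocks, as above):

```bash
echo $(( 0x1900000 * 4 / 1024 ))   # -> 102400 MiB: the data_btm region size
echo $(( 0x40 * 4 ))               # -> 256 KiB, i.e. the 0.25 MiB data_btm offset
```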
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.273855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.273904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.600 [2024-11-19 00:09:53.273916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:25:46.600 [2024-11-19 00:09:53.273925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.600 [2024-11-19 00:09:53.274088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.600 [2024-11-19 00:09:53.274099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.600 [2024-11-19 00:09:53.274108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:25:46.600 [2024-11-19 00:09:53.274146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.290049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.290101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.860 [2024-11-19 00:09:53.290116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.880 ms 00:25:46.860 [2024-11-19 00:09:53.290158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.304657] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:46.860 [2024-11-19 00:09:53.304711] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:46.860 [2024-11-19 00:09:53.304727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.304735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:46.860 [2024-11-19 00:09:53.304747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.456 ms 00:25:46.860 [2024-11-19 00:09:53.304755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.330801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.330875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:46.860 [2024-11-19 00:09:53.330887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.987 ms 00:25:46.860 [2024-11-19 00:09:53.330896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.344168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.344217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:46.860 [2024-11-19 00:09:53.344229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.212 ms 00:25:46.860 [2024-11-19 00:09:53.344236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.357110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.357166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:46.860 [2024-11-19 00:09:53.357179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.824 ms 00:25:46.860 [2024-11-19 00:09:53.357187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 
[2024-11-19 00:09:53.357851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.357877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:46.860 [2024-11-19 00:09:53.357889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:25:46.860 [2024-11-19 00:09:53.357899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.424740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.424804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:46.860 [2024-11-19 00:09:53.424827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.821 ms 00:25:46.860 [2024-11-19 00:09:53.424836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.436615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:46.860 [2024-11-19 00:09:53.439907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.440095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:46.860 [2024-11-19 00:09:53.440116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.012 ms 00:25:46.860 [2024-11-19 00:09:53.440145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.440240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.440253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:46.860 [2024-11-19 00:09:53.440263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:46.860 [2024-11-19 00:09:53.440275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.441143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.441181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:46.860 [2024-11-19 00:09:53.441193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:25:46.860 [2024-11-19 00:09:53.441202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.441232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.441242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:46.860 [2024-11-19 00:09:53.441252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.860 [2024-11-19 00:09:53.441261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.441304] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:46.860 [2024-11-19 00:09:53.441318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 00:09:53.441327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:46.860 [2024-11-19 00:09:53.441336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:46.860 [2024-11-19 00:09:53.441345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.860 [2024-11-19 00:09:53.466769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.860 [2024-11-19 
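Every management step above is bracketed by the same four trace_step notices (Action, name, duration, status), which makes the log easy to mine for where startup time goes. A hedged sketch, assuming a saved log in the one-entry-per-line form the console originally printed (the fused lines here would need splitting first; the file name is hypothetical):

```bash
# List FTL management steps by cost, most expensive first; the durations
# should sum to roughly the 307.459 ms the 'FTL startup' finish_msg reports.
awk '/\[FTL\]\[ftl0\] name:/     { sub(/.*name: /, "");  name = $0 }
     /\[FTL\]\[ftl0\] duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, name }' \
    build.log | sort -rn | head
```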
00:09:53.466965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:46.860 [2024-11-19 00:09:53.466987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.402 ms 00:25:46.861 [2024-11-19 00:09:53.467004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.861 [2024-11-19 00:09:53.467084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.861 [2024-11-19 00:09:53.467094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:46.861 [2024-11-19 00:09:53.467104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:46.861 [2024-11-19 00:09:53.467111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.861 [2024-11-19 00:09:53.468581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.459 ms, result 0
00:25:48.250  [2024-11-19T00:09:55.887Z] Copying: 22/1024 [MB] (22 MBps) [... ~65 intermediate progress-meter updates elided; the copy advances steadily from 22 MB to 1005 MB between 00:09:55Z and 00:11:00Z ...] [2024-11-19T00:11:00.937Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-19 00:11:00.682985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.683090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:54.245 [2024-11-19 00:11:00.683113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:54.245 [2024-11-19 00:11:00.683155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.683190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:54.245 [2024-11-19 00:11:00.687623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.687892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:54.245 [2024-11-19 00:11:00.687934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:26:54.245 [2024-11-19 00:11:00.687947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.688327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.688353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:54.245 [2024-11-19 00:11:00.688368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:26:54.245 [2024-11-19 00:11:00.688381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.693235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.693275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:54.245 [2024-11-19 00:11:00.693286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:26:54.245 [2024-11-19 00:11:00.693295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.699488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.699668] mngt/ftl_mngt.c:
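The reported average is easy to sanity-check from the bracketed wall-clock stamps: the copy starts at 00:09:55Z and the final update lands at 00:11:00Z, so 1024 MB moved in roughly 65 s, about 15.7 MBps, which the progress meter rounds down to the "(average 15 MBps)" shown. For example (GNU date syntax):

```bash
# Rough cross-check of the reported average copy throughput, using the first
# and last bracketed wall-clock stamps from the progress meter above.
start=$(date -ud '2024-11-19T00:09:55Z' +%s)
end=$(date -ud '2024-11-19T00:11:00Z' +%s)
echo "$(( 1024 / (end - start) )) MBps"   # -> 15 MBps
```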
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:54.245 [2024-11-19 00:11:00.699688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:26:54.245 [2024-11-19 00:11:00.699697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.726995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.727235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:54.245 [2024-11-19 00:11:00.727258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.222 ms 00:26:54.245 [2024-11-19 00:11:00.727267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.743282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.743331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:54.245 [2024-11-19 00:11:00.743345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.973 ms 00:26:54.245 [2024-11-19 00:11:00.743353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.747768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.747825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:54.245 [2024-11-19 00:11:00.747837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:26:54.245 [2024-11-19 00:11:00.747845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.773933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.773982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:54.245 [2024-11-19 00:11:00.773994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:26:54.245 [2024-11-19 00:11:00.774001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.799924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.799983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:54.245 [2024-11-19 00:11:00.799997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:26:54.245 [2024-11-19 00:11:00.800004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.825129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.825173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:54.245 [2024-11-19 00:11:00.825186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.069 ms 00:26:54.245 [2024-11-19 00:11:00.825193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.850428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.245 [2024-11-19 00:11:00.850473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:54.245 [2024-11-19 00:11:00.850486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.159 ms 00:26:54.245 [2024-11-19 00:11:00.850493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.245 [2024-11-19 00:11:00.850539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:26:54.245 [2024-11-19 00:11:00.850554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:54.245 [2024-11-19 00:11:00.850573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[... Band 3 through Band 100 elided: all 98 entries read identically, "0 / 261120 wr_cnt: 0 state: free" ...]
00:26:54.246 [2024-11-19 00:11:00.851382] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:54.246 [2024-11-19 00:11:00.851395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b3ee79cd-d3e4-452b-b6fc-dedb8c05032a 00:26:54.247 [2024-11-19 00:11:00.851403] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:54.247 [2024-11-19 00:11:00.851421] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:54.247 [2024-11-19 00:11:00.851429] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:54.247 [2024-11-19 00:11:00.851437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:54.247 [2024-11-19 00:11:00.851446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:54.247 [2024-11-19 00:11:00.851454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:54.247 [2024-11-19 00:11:00.851470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:54.247 [2024-11-19 00:11:00.851478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:54.247 [2024-11-19 00:11:00.851484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:54.247 [2024-11-19 00:11:00.851492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.247 [2024-11-19 00:11:00.851500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:54.247 [2024-11-19 00:11:00.851510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:26:54.247 [2024-11-19 00:11:00.851518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.865155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.247 [2024-11-19 00:11:00.865199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:54.247 [2024-11-19 00:11:00.865212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.614 ms 00:26:54.247 [2024-11-19 00:11:00.865220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.865646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.247 [2024-11-19 00:11:00.865658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:54.247 [2024-11-19 00:11:00.865675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:26:54.247 [2024-11-19 00:11:00.865683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.902218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.247 [2024-11-19 00:11:00.902269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:54.247 [2024-11-19 00:11:00.902283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.247 [2024-11-19 00:11:00.902292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.902361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.247 [2024-11-19 00:11:00.902370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:54.247 [2024-11-19 00:11:00.902385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
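The "WAF: inf" in the statistics block is expected here, not an error: write amplification factor is media writes divided by user writes, and this stage issued no user I/O after the dirty restart (total writes: 960, user writes: 0), so the ratio is infinite by construction. The arithmetic, as a one-liner:

```bash
# WAF = total (media) writes / user writes; zero user writes -> inf, as dumped.
awk -v total=960 -v user=0 'BEGIN { print "WAF:", (user ? total / user : "inf") }'
```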
duration: 0.000 ms 00:26:54.247 [2024-11-19 00:11:00.902395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.902483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.247 [2024-11-19 00:11:00.902494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:54.247 [2024-11-19 00:11:00.902504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.247 [2024-11-19 00:11:00.902513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.247 [2024-11-19 00:11:00.902530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.247 [2024-11-19 00:11:00.902540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:54.247 [2024-11-19 00:11:00.902548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.247 [2024-11-19 00:11:00.902560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.509 [2024-11-19 00:11:00.987168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.509 [2024-11-19 00:11:00.987227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:54.509 [2024-11-19 00:11:00.987241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.509 [2024-11-19 00:11:00.987250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.509 [2024-11-19 00:11:01.056192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.509 [2024-11-19 00:11:01.056404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:54.509 [2024-11-19 00:11:01.056423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.509 [2024-11-19 00:11:01.056441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.509 [2024-11-19 00:11:01.056506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.509 [2024-11-19 00:11:01.056516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:54.509 [2024-11-19 00:11:01.056526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.509 [2024-11-19 00:11:01.056535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.056594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.510 [2024-11-19 00:11:01.056604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:54.510 [2024-11-19 00:11:01.056613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.510 [2024-11-19 00:11:01.056622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.056738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.510 [2024-11-19 00:11:01.056748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:54.510 [2024-11-19 00:11:01.056757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.510 [2024-11-19 00:11:01.056766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.056798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.510 [2024-11-19 00:11:01.056809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:54.510 [2024-11-19 
00:11:01.056818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.510 [2024-11-19 00:11:01.056826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.056873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.510 [2024-11-19 00:11:01.056884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:54.510 [2024-11-19 00:11:01.056893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.510 [2024-11-19 00:11:01.056902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.056950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.510 [2024-11-19 00:11:01.056960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:54.510 [2024-11-19 00:11:01.056969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.510 [2024-11-19 00:11:01.056977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.510 [2024-11-19 00:11:01.057119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.106 ms, result 0 00:26:55.454 00:26:55.454 00:26:55.454 00:11:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:57.366 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:57.366 00:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:57.366 00:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:57.366 00:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:57.366 00:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:57.366 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:57.627 Process with pid 77382 is not found 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77382 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77382 ']' 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77382 00:26:57.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77382) - No such process 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77382 is not found' 00:26:57.627 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:57.888 Remove shared memory files 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:57.888 00:11:04 
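The "testfile2: OK" from md5sum -c above is the payoff of the whole dirty-shutdown exercise: data hashed before the unclean stop reads back identically after recovery. The pattern, reduced to a sketch (paths shortened; the real test drives the FTL bdev between the two hashes):

```bash
md5sum testfile2 > testfile2.md5   # hash the data while the device is live
# ... unclean FTL shutdown, restart, recovery from the dirty state ...
md5sum -c testfile2.md5            # "testfile2: OK" means nothing was lost
```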
ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:57.888 ************************************ 00:26:57.888 END TEST ftl_dirty_shutdown 00:26:57.888 ************************************ 00:26:57.888 00:26:57.888 real 4m20.210s 00:26:57.888 user 4m47.996s 00:26:57.888 sys 0m28.023s 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.888 00:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:57.888 00:11:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:57.888 00:11:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:57.888 00:11:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.888 00:11:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:57.888 ************************************ 00:26:57.888 START TEST ftl_upgrade_shutdown 00:26:57.888 ************************************ 00:26:57.888 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:58.150 * Looking for test storage... 00:26:58.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.150 --rc genhtml_branch_coverage=1 00:26:58.150 --rc genhtml_function_coverage=1 00:26:58.150 --rc genhtml_legend=1 00:26:58.150 --rc geninfo_all_blocks=1 00:26:58.150 --rc geninfo_unexecuted_blocks=1 00:26:58.150 00:26:58.150 ' 00:26:58.150 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.150 --rc genhtml_branch_coverage=1 00:26:58.150 --rc genhtml_function_coverage=1 00:26:58.150 --rc genhtml_legend=1 00:26:58.151 --rc geninfo_all_blocks=1 00:26:58.151 --rc geninfo_unexecuted_blocks=1 00:26:58.151 00:26:58.151 ' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.151 --rc genhtml_branch_coverage=1 00:26:58.151 --rc genhtml_function_coverage=1 00:26:58.151 --rc genhtml_legend=1 00:26:58.151 --rc geninfo_all_blocks=1 00:26:58.151 --rc geninfo_unexecuted_blocks=1 00:26:58.151 00:26:58.151 ' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.151 --rc genhtml_branch_coverage=1 00:26:58.151 --rc genhtml_function_coverage=1 00:26:58.151 --rc genhtml_legend=1 00:26:58.151 --rc geninfo_all_blocks=1 00:26:58.151 --rc geninfo_unexecuted_blocks=1 00:26:58.151 00:26:58.151 ' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- 
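The long trace above is just scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares component by component, so `lt 1.15 2` succeeds at the very first component (1 < 2) and the coverage flags get set accordingly. A minimal stand-alone sketch of the same idea (function name hypothetical, not the suite's own helper):

```bash
# ver_lt A B: return 0 if dotted version A sorts strictly before B.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)                      # split on dots
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                    # equal is not "less than"
}
ver_lt 1.15 2 && echo "lcov predates 2.x"       # prints: lcov predates 2.x
```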
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:58.151 00:11:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80187 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80187 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80187 ']' 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.151 00:11:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:58.151 [2024-11-19 00:11:04.789113] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
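The trace above shows upgrade_shutdown.sh pinning down its six knobs (bdev name, base and cache PCI addresses, their sizes in MiB, and the L2P DRAM limit) before tcp_target_setup brings up the target as pid 80187. Re-running just this test against the same device pair would look roughly like this, per the run_test invocation earlier in the log (a sketch, not an additional supported entry point):

```bash
cd /home/vagrant/spdk_repo/spdk
# run_test wraps this inside the suite; standalone it is the script plus the
# base and cache NVMe PCI addresses it parameterizes itself from:
./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
```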
00:26:58.151 [2024-11-19 00:11:04.789274] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80187 ] 00:26:58.412 [2024-11-19 00:11:04.951575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.412 [2024-11-19 00:11:05.075171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:59.357 00:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:59.616 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:59.616 { 00:26:59.616 "name": "basen1", 00:26:59.616 "aliases": [ 00:26:59.616 "81aba5c8-5548-4632-98ba-9af7ce611871" 00:26:59.616 ], 00:26:59.616 "product_name": "NVMe disk", 00:26:59.616 "block_size": 4096, 00:26:59.616 "num_blocks": 1310720, 00:26:59.616 "uuid": "81aba5c8-5548-4632-98ba-9af7ce611871", 00:26:59.616 "numa_id": -1, 00:26:59.616 "assigned_rate_limits": { 00:26:59.616 "rw_ios_per_sec": 0, 00:26:59.616 "rw_mbytes_per_sec": 0, 00:26:59.616 "r_mbytes_per_sec": 0, 00:26:59.616 "w_mbytes_per_sec": 0 00:26:59.616 }, 00:26:59.616 "claimed": true, 00:26:59.616 "claim_type": "read_many_write_one", 00:26:59.616 "zoned": false, 00:26:59.616 "supported_io_types": { 00:26:59.616 "read": true, 00:26:59.616 "write": true, 00:26:59.616 "unmap": true, 00:26:59.616 "flush": true, 00:26:59.616 "reset": true, 00:26:59.616 "nvme_admin": true, 00:26:59.616 "nvme_io": true, 00:26:59.616 "nvme_io_md": false, 00:26:59.616 "write_zeroes": true, 00:26:59.616 "zcopy": false, 00:26:59.616 "get_zone_info": false, 00:26:59.616 "zone_management": false, 00:26:59.616 "zone_append": false, 00:26:59.616 "compare": true, 00:26:59.616 "compare_and_write": false, 00:26:59.616 "abort": true, 00:26:59.616 "seek_hole": false, 00:26:59.616 "seek_data": false, 00:26:59.616 "copy": true, 00:26:59.616 "nvme_iov_md": false 00:26:59.616 }, 00:26:59.616 "driver_specific": { 00:26:59.616 "nvme": [ 00:26:59.616 { 00:26:59.616 "pci_address": "0000:00:11.0", 00:26:59.616 "trid": { 00:26:59.616 "trtype": "PCIe", 00:26:59.616 "traddr": "0000:00:11.0" 00:26:59.616 }, 00:26:59.616 "ctrlr_data": { 00:26:59.616 "cntlid": 0, 00:26:59.616 "vendor_id": "0x1b36", 00:26:59.616 "model_number": "QEMU NVMe Ctrl", 00:26:59.616 "serial_number": "12341", 00:26:59.616 "firmware_revision": "8.0.0", 00:26:59.616 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:59.617 "oacs": { 00:26:59.617 "security": 0, 00:26:59.617 "format": 1, 00:26:59.617 "firmware": 0, 00:26:59.617 "ns_manage": 1 00:26:59.617 }, 00:26:59.617 "multi_ctrlr": false, 00:26:59.617 "ana_reporting": false 00:26:59.617 }, 00:26:59.617 "vs": { 00:26:59.617 "nvme_version": "1.4" 00:26:59.617 }, 00:26:59.617 "ns_data": { 00:26:59.617 "id": 1, 00:26:59.617 "can_share": false 00:26:59.617 } 00:26:59.617 } 00:26:59.617 ], 00:26:59.617 "mp_policy": "active_passive" 00:26:59.617 } 00:26:59.617 } 00:26:59.617 ]' 00:26:59.617 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a6c9732a-38e9-401f-9a5a-013e5a5549db 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:59.877 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6c9732a-38e9-401f-9a5a-013e5a5549db 00:27:00.137 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:00.398 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=29e8e9e7-ffa9-4da1-9853-891d98ec08e1 00:27:00.398 00:11:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 29e8e9e7-ffa9-4da1-9853-891d98ec08e1 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ae668d4d-8e6d-4783-8e73-2b0a7f7d399f ]] 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 5120 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:00.659 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 00:27:00.660 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 00:27:00.660 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.660 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:00.660 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:00.660 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae668d4d-8e6d-4783-8e73-2b0a7f7d399f 00:27:00.920 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:00.920 { 00:27:00.920 "name": "ae668d4d-8e6d-4783-8e73-2b0a7f7d399f", 00:27:00.920 "aliases": [ 00:27:00.920 "lvs/basen1p0" 00:27:00.920 ], 00:27:00.920 "product_name": "Logical Volume", 00:27:00.920 "block_size": 4096, 00:27:00.920 "num_blocks": 5242880, 00:27:00.920 "uuid": "ae668d4d-8e6d-4783-8e73-2b0a7f7d399f", 00:27:00.920 "assigned_rate_limits": { 00:27:00.920 "rw_ios_per_sec": 0, 00:27:00.920 "rw_mbytes_per_sec": 0, 00:27:00.920 "r_mbytes_per_sec": 0, 00:27:00.920 "w_mbytes_per_sec": 0 00:27:00.920 }, 00:27:00.920 "claimed": false, 00:27:00.920 "zoned": false, 00:27:00.920 "supported_io_types": { 00:27:00.920 "read": true, 00:27:00.920 "write": true, 00:27:00.920 "unmap": true, 00:27:00.920 "flush": false, 00:27:00.920 "reset": true, 00:27:00.920 "nvme_admin": false, 00:27:00.920 "nvme_io": false, 00:27:00.920 "nvme_io_md": false, 00:27:00.920 "write_zeroes": 
true, 00:27:00.920 "zcopy": false, 00:27:00.920 "get_zone_info": false, 00:27:00.920 "zone_management": false, 00:27:00.921 "zone_append": false, 00:27:00.921 "compare": false, 00:27:00.921 "compare_and_write": false, 00:27:00.921 "abort": false, 00:27:00.921 "seek_hole": true, 00:27:00.921 "seek_data": true, 00:27:00.921 "copy": false, 00:27:00.921 "nvme_iov_md": false 00:27:00.921 }, 00:27:00.921 "driver_specific": { 00:27:00.921 "lvol": { 00:27:00.921 "lvol_store_uuid": "29e8e9e7-ffa9-4da1-9853-891d98ec08e1", 00:27:00.921 "base_bdev": "basen1", 00:27:00.921 "thin_provision": true, 00:27:00.921 "num_allocated_clusters": 0, 00:27:00.921 "snapshot": false, 00:27:00.921 "clone": false, 00:27:00.921 "esnap_clone": false 00:27:00.921 } 00:27:00.921 } 00:27:00.921 } 00:27:00.921 ]' 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:00.921 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:01.181 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:01.181 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:01.181 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:01.444 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:01.444 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:01.444 00:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ae668d4d-8e6d-4783-8e73-2b0a7f7d399f -c cachen1p0 --l2p_dram_limit 2 00:27:01.444 [2024-11-19 00:11:08.110839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.111102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:01.444 [2024-11-19 00:11:08.111156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:01.444 [2024-11-19 00:11:08.111168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.111256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.111268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:01.444 [2024-11-19 00:11:08.111280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:01.444 [2024-11-19 00:11:08.111289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.111315] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:01.444 [2024-11-19 
00:11:08.112151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:01.444 [2024-11-19 00:11:08.112184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.112192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:01.444 [2024-11-19 00:11:08.112204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:27:01.444 [2024-11-19 00:11:08.112212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.112253] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2dcf8c8e-0d92-49dc-966e-87c5c8f7f5be 00:27:01.444 [2024-11-19 00:11:08.114084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.114274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:01.444 [2024-11-19 00:11:08.114299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:01.444 [2024-11-19 00:11:08.114310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.123351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.123402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:01.444 [2024-11-19 00:11:08.123416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.943 ms 00:27:01.444 [2024-11-19 00:11:08.123426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.123478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.123490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:01.444 [2024-11-19 00:11:08.123499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:01.444 [2024-11-19 00:11:08.123512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.123573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.123586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:01.444 [2024-11-19 00:11:08.123595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:01.444 [2024-11-19 00:11:08.123610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.123635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:01.444 [2024-11-19 00:11:08.128191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.128237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:01.444 [2024-11-19 00:11:08.128252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.561 ms 00:27:01.444 [2024-11-19 00:11:08.128261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.444 [2024-11-19 00:11:08.128297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.444 [2024-11-19 00:11:08.128306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:01.444 [2024-11-19 00:11:08.128317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:01.445 [2024-11-19 00:11:08.128324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:01.445 [2024-11-19 00:11:08.128379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:01.445 [2024-11-19 00:11:08.128525] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:01.445 [2024-11-19 00:11:08.128543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:01.445 [2024-11-19 00:11:08.128555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:01.445 [2024-11-19 00:11:08.128568] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:01.445 [2024-11-19 00:11:08.128577] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:01.445 [2024-11-19 00:11:08.128588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:01.445 [2024-11-19 00:11:08.128596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:01.445 [2024-11-19 00:11:08.128607] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:01.445 [2024-11-19 00:11:08.128615] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:01.445 [2024-11-19 00:11:08.128625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.445 [2024-11-19 00:11:08.128633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:01.445 [2024-11-19 00:11:08.128642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:27:01.445 [2024-11-19 00:11:08.128650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.445 [2024-11-19 00:11:08.128736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.445 [2024-11-19 00:11:08.128745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:01.445 [2024-11-19 00:11:08.128757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:01.445 [2024-11-19 00:11:08.128773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.445 [2024-11-19 00:11:08.128880] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:01.445 [2024-11-19 00:11:08.128889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:01.445 [2024-11-19 00:11:08.128901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:01.445 [2024-11-19 00:11:08.128909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.128919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:01.445 [2024-11-19 00:11:08.128926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.128935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:01.445 [2024-11-19 00:11:08.128943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:01.445 [2024-11-19 00:11:08.128953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:01.445 [2024-11-19 00:11:08.128959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.128968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:01.445 [2024-11-19 00:11:08.128977] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:01.445 [2024-11-19 00:11:08.128985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.128992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:01.445 [2024-11-19 00:11:08.129001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:01.445 [2024-11-19 00:11:08.129008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:01.445 [2024-11-19 00:11:08.129027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:01.445 [2024-11-19 00:11:08.129036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:01.445 [2024-11-19 00:11:08.129052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:01.445 [2024-11-19 00:11:08.129075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:01.445 [2024-11-19 00:11:08.129102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:01.445 [2024-11-19 00:11:08.129151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:01.445 [2024-11-19 00:11:08.129178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:01.445 [2024-11-19 00:11:08.129202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:01.445 [2024-11-19 00:11:08.129227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:01.445 [2024-11-19 00:11:08.129251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:01.445 [2024-11-19 00:11:08.129260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129267] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:01.445 [2024-11-19 00:11:08.129302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:01.445 [2024-11-19 00:11:08.129310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:01.445 [2024-11-19 00:11:08.129330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:01.445 [2024-11-19 00:11:08.129341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:01.445 [2024-11-19 00:11:08.129349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:01.445 [2024-11-19 00:11:08.129358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:01.445 [2024-11-19 00:11:08.129364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:01.445 [2024-11-19 00:11:08.129373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:01.445 [2024-11-19 00:11:08.129385] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:01.445 [2024-11-19 00:11:08.129397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:01.445 [2024-11-19 00:11:08.129419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:01.445 [2024-11-19 00:11:08.129454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:01.445 [2024-11-19 00:11:08.129463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:01.445 [2024-11-19 00:11:08.129470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:01.445 [2024-11-19 00:11:08.129480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:01.445 [2024-11-19 00:11:08.129506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:01.446 [2024-11-19 00:11:08.129516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:01.446 [2024-11-19 00:11:08.129522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:01.446 [2024-11-19 00:11:08.129533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:01.446 [2024-11-19 00:11:08.129540] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:01.446 [2024-11-19 00:11:08.129552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.446 [2024-11-19 00:11:08.129560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:01.446 [2024-11-19 00:11:08.129586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:01.446 [2024-11-19 00:11:08.129594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:01.446 [2024-11-19 00:11:08.129603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:01.446 [2024-11-19 00:11:08.129610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.446 [2024-11-19 00:11:08.129620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:01.446 [2024-11-19 00:11:08.129629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.802 ms 00:27:01.446 [2024-11-19 00:11:08.129639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.446 [2024-11-19 00:11:08.129680] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
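Up to this point the trace has assembled the whole FTL device stack through rpc.py. The sequence, with the size bookkeeping spelled out (commands, names and UUIDs are the run-specific ones logged above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: 1310720 blocks x 4096 B = 5120 MiB -> bdev "basen1"
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    # the stale lvstore a6c9732a-... is deleted first, then a fresh one is built
    $rpc bdev_lvol_create_lvstore basen1 lvs
    # 20480 MiB thin-provisioned (-t) volume on a 5120 MiB base: only written
    # clusters get allocated, so the oversubscription is deliberate
    $rpc bdev_lvol_create basen1p0 20480 -t -u 29e8e9e7-ffa9-4da1-9853-891d98ec08e1
    # cache device: attach 0000:00:10.0 -> cachen1, carve a 5120 MiB split
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1         # -> cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d ae668d4d-8e6d-4783-8e73-2b0a7f7d399f \
        -c cachen1p0 --l2p_dram_limit 2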
00:27:01.446 [2024-11-19 00:11:08.129695] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:04.746 [2024-11-19 00:11:11.218176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.218269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:04.746 [2024-11-19 00:11:11.218288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3088.480 ms 00:27:04.746 [2024-11-19 00:11:11.218300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.250257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.250478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:04.746 [2024-11-19 00:11:11.250493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.701 ms 00:27:04.746 [2024-11-19 00:11:11.250505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.250597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.250611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:04.746 [2024-11-19 00:11:11.250621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:04.746 [2024-11-19 00:11:11.250635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.286394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.286450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:04.746 [2024-11-19 00:11:11.286462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.703 ms 00:27:04.746 [2024-11-19 00:11:11.286472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.286509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.286525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:04.746 [2024-11-19 00:11:11.286533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:04.746 [2024-11-19 00:11:11.286543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.287116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.287184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:04.746 [2024-11-19 00:11:11.287196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:27:04.746 [2024-11-19 00:11:11.287208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.287266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.287278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:04.746 [2024-11-19 00:11:11.287290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:04.746 [2024-11-19 00:11:11.287304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.305042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.305103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:04.746 [2024-11-19 00:11:11.305115] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.718 ms 00:27:04.746 [2024-11-19 00:11:11.305147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.318658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:04.746 [2024-11-19 00:11:11.320022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.320069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:04.746 [2024-11-19 00:11:11.320083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.776 ms 00:27:04.746 [2024-11-19 00:11:11.320091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.358646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.358708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:04.746 [2024-11-19 00:11:11.358729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.518 ms 00:27:04.746 [2024-11-19 00:11:11.358738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.358855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.358871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:04.746 [2024-11-19 00:11:11.358887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:27:04.746 [2024-11-19 00:11:11.358896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.384944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.384998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:04.746 [2024-11-19 00:11:11.385016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.968 ms 00:27:04.746 [2024-11-19 00:11:11.385024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.410761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.410971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:04.746 [2024-11-19 00:11:11.411000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.672 ms 00:27:04.746 [2024-11-19 00:11:11.411008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.746 [2024-11-19 00:11:11.411867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.746 [2024-11-19 00:11:11.411907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:04.746 [2024-11-19 00:11:11.411921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:27:04.746 [2024-11-19 00:11:11.411929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.501267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.008 [2024-11-19 00:11:11.501497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:05.008 [2024-11-19 00:11:11.501531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.267 ms 00:27:05.008 [2024-11-19 00:11:11.501540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.529538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
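The numbers in the layout dump above are internally consistent and can be sanity-checked; for example, the L2P region size follows directly from the logged entry count and address size (a consistency check on the log, not a step of the test itself):

    # 3774873 L2P entries x 4 B per address = 15099492 B, about 14.4 MiB,
    # which the 14.50 MiB "Region l2p" rounds up to.
    python3 -c 'print(3774873 * 4 / 1048576)'    # -> 14.399...
    # With --l2p_dram_limit 2 only part of that stays resident, matching the
    # "l2p maximum resident size is: 1 (of 2) MiB" notice above.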
00:27:05.008 [2024-11-19 00:11:11.529613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:05.008 [2024-11-19 00:11:11.529640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.882 ms 00:27:05.008 [2024-11-19 00:11:11.529649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.556271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.008 [2024-11-19 00:11:11.556475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:05.008 [2024-11-19 00:11:11.556504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.561 ms 00:27:05.008 [2024-11-19 00:11:11.556512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.583516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.008 [2024-11-19 00:11:11.583708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:05.008 [2024-11-19 00:11:11.583737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.860 ms 00:27:05.008 [2024-11-19 00:11:11.583745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.583969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.008 [2024-11-19 00:11:11.583997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:05.008 [2024-11-19 00:11:11.584014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:05.008 [2024-11-19 00:11:11.584023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.584170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.008 [2024-11-19 00:11:11.584182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:05.008 [2024-11-19 00:11:11.584197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:27:05.008 [2024-11-19 00:11:11.584205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.008 [2024-11-19 00:11:11.585453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3474.097 ms, result 0 00:27:05.008 { 00:27:05.008 "name": "ftl", 00:27:05.008 "uuid": "2dcf8c8e-0d92-49dc-966e-87c5c8f7f5be" 00:27:05.008 } 00:27:05.008 00:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:05.270 [2024-11-19 00:11:11.804466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.270 00:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:05.530 00:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:05.530 [2024-11-19 00:11:12.120668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:05.530 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:05.791 [2024-11-19 00:11:12.321157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:05.791 00:11:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:06.052 Fill FTL, iteration 1 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80305 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80305 /var/tmp/spdk.tgt.sock 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80305 ']' 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:06.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.053 00:11:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.053 [2024-11-19 00:11:12.711295] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
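With FTL startup finished, the target side exports the new bdev over NVMe/TCP and a second SPDK target is started on core 1 with its own RPC socket to act as the initiator side; reconstructed from the RPCs traced above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config                 # persists the target-side configuration
    # initiator-side helper target, isolated on core 1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &        # pid 80305 in this run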
00:27:06.053 [2024-11-19 00:11:12.711635] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80305 ] 00:27:06.312 [2024-11-19 00:11:12.876555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.312 [2024-11-19 00:11:12.997929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.255 00:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.255 00:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:07.255 00:11:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:07.255 ftln1 00:27:07.255 00:11:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:07.255 00:11:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80305 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80305 ']' 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80305 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80305 00:27:07.517 killing process with pid 80305 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80305' 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80305 00:27:07.517 00:11:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80305 00:27:08.903 00:11:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:08.903 00:11:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:08.903 [2024-11-19 00:11:15.591855] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
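As the trace shows, tcp_dd works by attaching the exported namespace on the helper target, snapshotting its bdev configuration into ini.json, killing the helper, and handing that JSON to spdk_dd, which replays the config and drives the I/O itself on the same core and socket:

    ini_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # creates bdev "ftln1"
    { echo '{"subsystems": ['
      $ini_rpc save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill "$spdk_ini_pid"     # killprocess 80305 in the trace
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0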
00:27:08.903 [2024-11-19 00:11:15.591967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80353 ] 00:27:09.164 [2024-11-19 00:11:15.747446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.164 [2024-11-19 00:11:15.822139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.554  [2024-11-19T00:11:18.188Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-19T00:11:19.133Z] Copying: 501/1024 [MB] (247 MBps) [2024-11-19T00:11:20.151Z] Copying: 721/1024 [MB] (220 MBps) [2024-11-19T00:11:20.718Z] Copying: 921/1024 [MB] (200 MBps) [2024-11-19T00:11:21.289Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:27:14.597 00:27:14.597 Calculate MD5 checksum, iteration 1 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:14.597 00:11:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.597 [2024-11-19 00:11:21.208564] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
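Iteration 1's verify pass, invoked above and completed just below: the same 1 GiB window is read back out of the FTL device into a scratch file and hashed, and the digest is kept so the data can be re-checked after the shutdown/upgrade cycle this test exercises:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')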
00:27:14.597 [2024-11-19 00:11:21.209282] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80410 ] 00:27:14.856 [2024-11-19 00:11:21.365361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.856 [2024-11-19 00:11:21.474066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.233  [2024-11-19T00:11:23.491Z] Copying: 683/1024 [MB] (683 MBps) [2024-11-19T00:11:24.059Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:27:17.367 00:27:17.367 00:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:17.367 00:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:19.904 Fill FTL, iteration 2 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=701086096a78d97b96de8f675a1e87b5 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:19.904 00:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.904 [2024-11-19 00:11:26.133603] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
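The seek/skip bookkeeping in the trace comes from the fill/verify loop in upgrade_shutdown.sh; a sketch of its shape, reconstructed from the script line numbers logged above (@38 through @48), with $testfile standing in for the scratch path:

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        # tcp_dd is the common.sh helper shown expanded earlier in this trace
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum "$testfile" | cut -f1 '-d ')
    done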
00:27:19.904 [2024-11-19 00:11:26.133732] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80467 ] 00:27:19.904 [2024-11-19 00:11:26.288259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.905 [2024-11-19 00:11:26.377686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.282  [2024-11-19T00:11:28.909Z] Copying: 239/1024 [MB] (239 MBps) [2024-11-19T00:11:29.843Z] Copying: 494/1024 [MB] (255 MBps) [2024-11-19T00:11:30.778Z] Copying: 727/1024 [MB] (233 MBps) [2024-11-19T00:11:31.036Z] Copying: 951/1024 [MB] (224 MBps) [2024-11-19T00:11:31.973Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:27:25.281 00:27:25.281 Calculate MD5 checksum, iteration 2 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:25.281 00:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:25.281 [2024-11-19 00:11:31.685378] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:25.281 [2024-11-19 00:11:31.685504] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80526 ] 00:27:25.281 [2024-11-19 00:11:31.843802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.542 [2024-11-19 00:11:31.994417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.927  [2024-11-19T00:11:34.557Z] Copying: 490/1024 [MB] (490 MBps) [2024-11-19T00:11:35.495Z] Copying: 1024/1024 [MB] (average 548 MBps) 00:27:28.803 00:27:28.803 00:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:28.803 00:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:30.720 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:30.720 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3d2c823ead611e844b69cea48edc0d6d 00:27:30.720 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:30.720 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:30.720 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.981 [2024-11-19 00:11:37.567904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.981 [2024-11-19 00:11:37.567946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.981 [2024-11-19 00:11:37.567957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:30.981 [2024-11-19 00:11:37.567963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.981 [2024-11-19 00:11:37.567981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.981 [2024-11-19 00:11:37.567989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.981 [2024-11-19 00:11:37.567995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:30.981 [2024-11-19 00:11:37.568003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.981 [2024-11-19 00:11:37.568018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.981 [2024-11-19 00:11:37.568025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.981 [2024-11-19 00:11:37.568032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.981 [2024-11-19 00:11:37.568037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.981 [2024-11-19 00:11:37.568085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.173 ms, result 0 00:27:30.981 true 00:27:30.981 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.242 { 00:27:31.242 "name": "ftl", 00:27:31.242 "properties": [ 00:27:31.242 { 00:27:31.242 "name": "superblock_version", 00:27:31.242 "value": 5, 00:27:31.242 "read-only": true 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "name": "base_device", 00:27:31.242 "bands": [ 00:27:31.242 { 00:27:31.242 "id": 0, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 
00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 1, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 2, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 3, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 4, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 5, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 6, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 7, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 8, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 9, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 10, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 11, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 12, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 13, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 14, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 15, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 16, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 17, 00:27:31.242 "state": "FREE", 00:27:31.242 "validity": 0.0 00:27:31.242 } 00:27:31.242 ], 00:27:31.242 "read-only": true 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "name": "cache_device", 00:27:31.242 "type": "bdev", 00:27:31.242 "chunks": [ 00:27:31.242 { 00:27:31.242 "id": 0, 00:27:31.242 "state": "INACTIVE", 00:27:31.242 "utilization": 0.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 1, 00:27:31.242 "state": "CLOSED", 00:27:31.242 "utilization": 1.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 2, 00:27:31.242 "state": "CLOSED", 00:27:31.242 "utilization": 1.0 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 3, 00:27:31.242 "state": "OPEN", 00:27:31.242 "utilization": 0.001953125 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "id": 4, 00:27:31.242 "state": "OPEN", 00:27:31.242 "utilization": 0.0 00:27:31.242 } 00:27:31.242 ], 00:27:31.242 "read-only": true 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "name": "verbose_mode", 00:27:31.242 "value": true, 00:27:31.242 "unit": "", 00:27:31.242 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:31.242 }, 00:27:31.242 { 00:27:31.242 "name": "prep_upgrade_on_shutdown", 00:27:31.242 "value": false, 00:27:31.242 "unit": "", 00:27:31.242 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:31.242 } 00:27:31.242 ] 00:27:31.242 } 00:27:31.242 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:31.503 [2024-11-19 00:11:37.976191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
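The property dump above is the state the test keys off: all 18 base-device bands are FREE, while the cache_device reports chunk 0 INACTIVE, chunks 1 and 2 CLOSED at utilization 1.0, and chunk 3 OPEN at 0.001953125. A minimal sketch of the dirty-chunk check performed at upgrade_shutdown.sh@63 just below, using the same rpc.py path and jq filter that appear in this run:

    # Count NV cache chunks that still hold data; the test only proceeds when
    # there is something in the write buffer to carry across the restart.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
             | select(.utilization != 0.0)] | length')
    # In this run: chunks 1 and 2 (utilization 1.0) plus chunk 3 (0.001953125)
    # give used=3, so the [[ $used -eq 0 ]] guard below does not trip.
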
00:27:31.503 [2024-11-19 00:11:37.976299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:31.503 [2024-11-19 00:11:37.976349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.503 [2024-11-19 00:11:37.976366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.503 [2024-11-19 00:11:37.976476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.503 [2024-11-19 00:11:37.976497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:31.503 [2024-11-19 00:11:37.976595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.503 [2024-11-19 00:11:37.976612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.503 [2024-11-19 00:11:37.976788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.503 [2024-11-19 00:11:37.976880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:31.503 [2024-11-19 00:11:37.976898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.503 [2024-11-19 00:11:37.976913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.504 [2024-11-19 00:11:37.977005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.801 ms, result 0 00:27:31.504 true 00:27:31.504 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:31.504 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:31.504 00:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.504 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:31.504 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:31.504 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:31.766 [2024-11-19 00:11:38.348457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.766 [2024-11-19 00:11:38.348486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:31.766 [2024-11-19 00:11:38.348493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:31.766 [2024-11-19 00:11:38.348500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.766 [2024-11-19 00:11:38.348515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.766 [2024-11-19 00:11:38.348522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:31.766 [2024-11-19 00:11:38.348527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.766 [2024-11-19 00:11:38.348532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.766 [2024-11-19 00:11:38.348546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.766 [2024-11-19 00:11:38.348552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:31.766 [2024-11-19 00:11:38.348557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.766 [2024-11-19 00:11:38.348562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:31.766 [2024-11-19 00:11:38.348602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.134 ms, result 0 00:27:31.766 true 00:27:31.766 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:32.028 { 00:27:32.028 "name": "ftl", 00:27:32.028 "properties": [ 00:27:32.028 { 00:27:32.028 "name": "superblock_version", 00:27:32.028 "value": 5, 00:27:32.028 "read-only": true 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "name": "base_device", 00:27:32.028 "bands": [ 00:27:32.028 { 00:27:32.028 "id": 0, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 1, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 2, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 3, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 4, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 5, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 6, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 7, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 8, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 9, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 10, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 11, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 12, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 13, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 14, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 15, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 16, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 17, 00:27:32.028 "state": "FREE", 00:27:32.028 "validity": 0.0 00:27:32.028 } 00:27:32.028 ], 00:27:32.028 "read-only": true 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "name": "cache_device", 00:27:32.028 "type": "bdev", 00:27:32.028 "chunks": [ 00:27:32.028 { 00:27:32.028 "id": 0, 00:27:32.028 "state": "INACTIVE", 00:27:32.028 "utilization": 0.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 1, 00:27:32.028 "state": "CLOSED", 00:27:32.028 "utilization": 1.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 2, 00:27:32.028 "state": "CLOSED", 00:27:32.028 "utilization": 1.0 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 3, 00:27:32.028 "state": "OPEN", 00:27:32.028 "utilization": 0.001953125 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "id": 4, 00:27:32.028 "state": "OPEN", 00:27:32.028 "utilization": 0.0 00:27:32.028 } 00:27:32.028 ], 00:27:32.028 "read-only": true 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "name": "verbose_mode", 
00:27:32.028 "value": true, 00:27:32.028 "unit": "", 00:27:32.028 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:32.028 }, 00:27:32.028 { 00:27:32.028 "name": "prep_upgrade_on_shutdown", 00:27:32.028 "value": true, 00:27:32.028 "unit": "", 00:27:32.028 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:32.028 } 00:27:32.028 ] 00:27:32.028 } 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80187 ]] 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80187 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80187 ']' 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80187 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80187 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80187' 00:27:32.029 killing process with pid 80187 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80187 00:27:32.029 00:11:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80187 00:27:32.601 [2024-11-19 00:11:39.124539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:32.601 [2024-11-19 00:11:39.134406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.601 [2024-11-19 00:11:39.134512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:32.601 [2024-11-19 00:11:39.134565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:32.601 [2024-11-19 00:11:39.134616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.601 [2024-11-19 00:11:39.134648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:32.601 [2024-11-19 00:11:39.136678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.601 [2024-11-19 00:11:39.136759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:32.601 [2024-11-19 00:11:39.136817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.975 ms 00:27:32.601 [2024-11-19 00:11:39.136834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.610 [2024-11-19 00:11:47.916794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.610 [2024-11-19 00:11:47.916992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:42.610 [2024-11-19 00:11:47.917390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8779.897 ms 00:27:42.610 [2024-11-19 00:11:47.917439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.610 [2024-11-19 00:11:47.918632] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:42.610 [2024-11-19 00:11:47.918753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:42.610 [2024-11-19 00:11:47.918811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.058 ms 00:27:42.610 [2024-11-19 00:11:47.918830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.610 [2024-11-19 00:11:47.919721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.610 [2024-11-19 00:11:47.919815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:42.610 [2024-11-19 00:11:47.919869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:27:42.610 [2024-11-19 00:11:47.919890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.610 [2024-11-19 00:11:47.928220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.610 [2024-11-19 00:11:47.928315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:42.610 [2024-11-19 00:11:47.928364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.273 ms 00:27:42.610 [2024-11-19 00:11:47.928373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.610 [2024-11-19 00:11:47.933846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.610 [2024-11-19 00:11:47.933884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:42.610 [2024-11-19 00:11:47.933893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.446 ms 00:27:42.611 [2024-11-19 00:11:47.933900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.933964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.611 [2024-11-19 00:11:47.933972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:42.611 [2024-11-19 00:11:47.933985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:42.611 [2024-11-19 00:11:47.933992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.941490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.611 [2024-11-19 00:11:47.941522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:42.611 [2024-11-19 00:11:47.941530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.485 ms 00:27:42.611 [2024-11-19 00:11:47.941536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.948967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.611 [2024-11-19 00:11:47.948998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:42.611 [2024-11-19 00:11:47.949005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.399 ms 00:27:42.611 [2024-11-19 00:11:47.949011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.956384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.611 [2024-11-19 00:11:47.956413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:42.611 [2024-11-19 00:11:47.956420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.343 ms 00:27:42.611 [2024-11-19 00:11:47.956426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.963791] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.611 [2024-11-19 00:11:47.963899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:42.611 [2024-11-19 00:11:47.963911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.304 ms 00:27:42.611 [2024-11-19 00:11:47.963917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.611 [2024-11-19 00:11:47.963941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:42.611 [2024-11-19 00:11:47.963952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:42.611 [2024-11-19 00:11:47.963960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:42.611 [2024-11-19 00:11:47.963974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:42.611 [2024-11-19 00:11:47.963980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.963986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.963992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.963997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.964003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.964009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.964015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:42.611 [2024-11-19 00:11:47.964021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:42.612 [2024-11-19 00:11:47.964069] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:42.612 [2024-11-19 00:11:47.964075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2dcf8c8e-0d92-49dc-966e-87c5c8f7f5be 00:27:42.612 [2024-11-19 00:11:47.964080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:42.612 [2024-11-19 00:11:47.964086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:42.612 [2024-11-19 00:11:47.964091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:42.612 [2024-11-19 00:11:47.964097] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:42.612 [2024-11-19 00:11:47.964103] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:42.612 [2024-11-19 00:11:47.964111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:42.612 [2024-11-19 00:11:47.964116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:42.612 [2024-11-19 00:11:47.964139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:42.612 [2024-11-19 00:11:47.964144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:42.612 [2024-11-19 00:11:47.964151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.612 [2024-11-19 00:11:47.964163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:42.612 [2024-11-19 00:11:47.964170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:27:42.612 [2024-11-19 00:11:47.964176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.612 [2024-11-19 00:11:47.974189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.612 [2024-11-19 00:11:47.974289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:42.612 [2024-11-19 00:11:47.974301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.999 ms 00:27:42.612 [2024-11-19 00:11:47.974312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.612 [2024-11-19 00:11:47.974591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.612 [2024-11-19 00:11:47.974603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:42.613 [2024-11-19 00:11:47.974610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:27:42.613 [2024-11-19 00:11:47.974617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.008326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.008352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:42.613 [2024-11-19 00:11:48.008364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.008370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.008391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.008398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:42.613 [2024-11-19 00:11:48.008404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.008410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.008470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.008478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:42.613 [2024-11-19 00:11:48.008484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.008490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.008504] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.008510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:42.613 [2024-11-19 00:11:48.008516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.008522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.068558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.068689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:42.613 [2024-11-19 00:11:48.068701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.068712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.117337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.117367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:42.613 [2024-11-19 00:11:48.117375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.117382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.117431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.117439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.613 [2024-11-19 00:11:48.117445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.117451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.613 [2024-11-19 00:11:48.117496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.613 [2024-11-19 00:11:48.117503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.613 [2024-11-19 00:11:48.117509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.613 [2024-11-19 00:11:48.117516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.614 [2024-11-19 00:11:48.117584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.614 [2024-11-19 00:11:48.117591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.614 [2024-11-19 00:11:48.117597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.614 [2024-11-19 00:11:48.117602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.614 [2024-11-19 00:11:48.117625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.614 [2024-11-19 00:11:48.117634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:42.614 [2024-11-19 00:11:48.117640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.614 [2024-11-19 00:11:48.117645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.614 [2024-11-19 00:11:48.117673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.614 [2024-11-19 00:11:48.117680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.614 [2024-11-19 00:11:48.117685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.614 [2024-11-19 00:11:48.117692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.614 
[2024-11-19 00:11:48.117726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.614 [2024-11-19 00:11:48.117733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.614 [2024-11-19 00:11:48.117739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.614 [2024-11-19 00:11:48.117745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.614 [2024-11-19 00:11:48.117842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8983.378 ms, result 0 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:45.162 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80725 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80725 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80725 ']' 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.163 00:11:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.163 [2024-11-19 00:11:51.783629] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
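The statistics dumped at the end of the shutdown above are internally consistent: WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006, i.e. FTL issued 262464 blocks of metadata and relocation writes on top of the 524288 blocks written by the test, and those 524288 blocks match the valid LBAs reported across bands 1-3 (261120 + 261120 + 2048 = 524288).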
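With the old target gone, tcp_target_setup brings up a fresh spdk_tgt from the JSON config saved before the shutdown; FTL then reloads its superblock and restores the persisted metadata instead of rebuilding from scratch, as the Restore steps below show. Condensed from the common.sh helpers traced in this run (a sketch, not the verbatim script):

    # Stop the old target; with prep_upgrade_on_shutdown=true this is what
    # triggered the ~9 s 'FTL shutdown' sequence above (persist L2P, NV cache,
    # band/trim metadata, superblock, then set the clean state).
    kill -0 "$spdk_tgt_pid" && kill "$spdk_tgt_pid"
    wait "$spdk_tgt_pid"

    # Relaunch pinned to core 0 from the saved config; waitforlisten then
    # polls /var/tmp/spdk.sock until the new app answers RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
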
00:27:45.163 [2024-11-19 00:11:51.783916] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80725 ] 00:27:45.424 [2024-11-19 00:11:51.942601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.424 [2024-11-19 00:11:52.018488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.996 [2024-11-19 00:11:52.583431] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:45.996 [2024-11-19 00:11:52.583484] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.259 [2024-11-19 00:11:52.726324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.259 [2024-11-19 00:11:52.726358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:46.259 [2024-11-19 00:11:52.726368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:46.259 [2024-11-19 00:11:52.726375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.259 [2024-11-19 00:11:52.726410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.259 [2024-11-19 00:11:52.726418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.259 [2024-11-19 00:11:52.726424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:46.259 [2024-11-19 00:11:52.726430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.259 [2024-11-19 00:11:52.726448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:46.260 [2024-11-19 00:11:52.726956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:46.260 [2024-11-19 00:11:52.726967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.726973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.260 [2024-11-19 00:11:52.726979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:27:46.260 [2024-11-19 00:11:52.726985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.727922] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:46.260 [2024-11-19 00:11:52.737523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.737551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:46.260 [2024-11-19 00:11:52.737563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.603 ms 00:27:46.260 [2024-11-19 00:11:52.737569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.737613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.737621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:46.260 [2024-11-19 00:11:52.737627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:46.260 [2024-11-19 00:11:52.737632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.742082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 
00:11:52.742108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.260 [2024-11-19 00:11:52.742115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.400 ms 00:27:46.260 [2024-11-19 00:11:52.742137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.742181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.742187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.260 [2024-11-19 00:11:52.742193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:46.260 [2024-11-19 00:11:52.742199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.742234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.742243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:46.260 [2024-11-19 00:11:52.742250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.260 [2024-11-19 00:11:52.742256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.742271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:46.260 [2024-11-19 00:11:52.744850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.744969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.260 [2024-11-19 00:11:52.744985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.582 ms 00:27:46.260 [2024-11-19 00:11:52.744991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.745017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.745023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:46.260 [2024-11-19 00:11:52.745029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.260 [2024-11-19 00:11:52.745035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.745049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:46.260 [2024-11-19 00:11:52.745066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:46.260 [2024-11-19 00:11:52.745092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:46.260 [2024-11-19 00:11:52.745103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:46.260 [2024-11-19 00:11:52.745190] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:46.260 [2024-11-19 00:11:52.745198] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:46.260 [2024-11-19 00:11:52.745206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:46.260 [2024-11-19 00:11:52.745214] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:46.260 [2024-11-19 00:11:52.745223] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:46.260 [2024-11-19 00:11:52.745229] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:46.260 [2024-11-19 00:11:52.745234] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:46.260 [2024-11-19 00:11:52.745239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:46.260 [2024-11-19 00:11:52.745245] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:46.260 [2024-11-19 00:11:52.745251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.745256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:46.260 [2024-11-19 00:11:52.745262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:27:46.260 [2024-11-19 00:11:52.745267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.745331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.260 [2024-11-19 00:11:52.745337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:46.260 [2024-11-19 00:11:52.745344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:46.260 [2024-11-19 00:11:52.745350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.260 [2024-11-19 00:11:52.745425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:46.260 [2024-11-19 00:11:52.745432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:46.260 [2024-11-19 00:11:52.745438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.260 [2024-11-19 00:11:52.745444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.260 [2024-11-19 00:11:52.745449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:46.260 [2024-11-19 00:11:52.745454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:46.260 [2024-11-19 00:11:52.745459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:46.260 [2024-11-19 00:11:52.745464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:46.260 [2024-11-19 00:11:52.745470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:46.260 [2024-11-19 00:11:52.745476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.260 [2024-11-19 00:11:52.745481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:46.260 [2024-11-19 00:11:52.745486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:46.260 [2024-11-19 00:11:52.745491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.260 [2024-11-19 00:11:52.745496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:46.260 [2024-11-19 00:11:52.745502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:46.260 [2024-11-19 00:11:52.745507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.260 [2024-11-19 00:11:52.745512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:46.260 [2024-11-19 00:11:52.745518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:46.261 [2024-11-19 00:11:52.745523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745528] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:46.261 [2024-11-19 00:11:52.745533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:46.261 [2024-11-19 00:11:52.745547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:46.261 [2024-11-19 00:11:52.745566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:46.261 [2024-11-19 00:11:52.745581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:46.261 [2024-11-19 00:11:52.745595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:46.261 [2024-11-19 00:11:52.745609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:46.261 [2024-11-19 00:11:52.745624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:46.261 [2024-11-19 00:11:52.745638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:46.261 [2024-11-19 00:11:52.745643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745648] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:46.261 [2024-11-19 00:11:52.745654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:46.261 [2024-11-19 00:11:52.745659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.261 [2024-11-19 00:11:52.745673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:46.261 [2024-11-19 00:11:52.745679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:46.261 [2024-11-19 00:11:52.745683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:46.261 [2024-11-19 00:11:52.745689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:46.261 [2024-11-19 00:11:52.745694] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:46.261 [2024-11-19 00:11:52.745699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:46.261 [2024-11-19 00:11:52.745705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:46.261 [2024-11-19 00:11:52.745711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:46.261 [2024-11-19 00:11:52.745723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:46.261 [2024-11-19 00:11:52.745739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:46.261 [2024-11-19 00:11:52.745744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:46.261 [2024-11-19 00:11:52.745749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:46.261 [2024-11-19 00:11:52.745755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:46.261 [2024-11-19 00:11:52.745791] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:46.261 [2024-11-19 00:11:52.745797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.261 [2024-11-19 00:11:52.745809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:46.261 [2024-11-19 00:11:52.745814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:46.261 [2024-11-19 00:11:52.745820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:46.261 [2024-11-19 00:11:52.745825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.261 [2024-11-19 00:11:52.745831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:46.261 [2024-11-19 00:11:52.745855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:27:46.261 [2024-11-19 00:11:52.745862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.261 [2024-11-19 00:11:52.745896] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:46.261 [2024-11-19 00:11:52.745906] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:50.474 [2024-11-19 00:11:56.873112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.873200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:50.474 [2024-11-19 00:11:56.873218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4127.197 ms 00:27:50.474 [2024-11-19 00:11:56.873228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.904845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.904909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:50.474 [2024-11-19 00:11:56.904923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.356 ms 00:27:50.474 [2024-11-19 00:11:56.904932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.905038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.905050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:50.474 [2024-11-19 00:11:56.905060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:50.474 [2024-11-19 00:11:56.905068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.940402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.940450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:50.474 [2024-11-19 00:11:56.940467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.295 ms 00:27:50.474 [2024-11-19 00:11:56.940476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.940512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.940521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:50.474 [2024-11-19 00:11:56.940531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.474 [2024-11-19 00:11:56.940539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.941170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.941195] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:50.474 [2024-11-19 00:11:56.941206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:27:50.474 [2024-11-19 00:11:56.941224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.941278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.941288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:50.474 [2024-11-19 00:11:56.941297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:50.474 [2024-11-19 00:11:56.941306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.959099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.959165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:50.474 [2024-11-19 00:11:56.959178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.772 ms 00:27:50.474 [2024-11-19 00:11:56.959186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.973616] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:50.474 [2024-11-19 00:11:56.973682] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:50.474 [2024-11-19 00:11:56.973697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.973706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:50.474 [2024-11-19 00:11:56.973716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.378 ms 00:27:50.474 [2024-11-19 00:11:56.973723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:56.988458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:56.988506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:50.474 [2024-11-19 00:11:56.988519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.678 ms 00:27:50.474 [2024-11-19 00:11:56.988528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.000952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.000999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:50.474 [2024-11-19 00:11:57.001011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.366 ms 00:27:50.474 [2024-11-19 00:11:57.001018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.013551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.013596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:50.474 [2024-11-19 00:11:57.013608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.483 ms 00:27:50.474 [2024-11-19 00:11:57.013614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.014346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.014373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:50.474 [2024-11-19 
00:11:57.014383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.611 ms 00:27:50.474 [2024-11-19 00:11:57.014391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.086733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.087008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:50.474 [2024-11-19 00:11:57.087036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.315 ms 00:27:50.474 [2024-11-19 00:11:57.087047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.098659] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:50.474 [2024-11-19 00:11:57.099780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.099827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:50.474 [2024-11-19 00:11:57.099841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.380 ms 00:27:50.474 [2024-11-19 00:11:57.099850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.099967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.099984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:50.474 [2024-11-19 00:11:57.099995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:50.474 [2024-11-19 00:11:57.100004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.100068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.100080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:50.474 [2024-11-19 00:11:57.100089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:50.474 [2024-11-19 00:11:57.100097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.100143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.100154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:50.474 [2024-11-19 00:11:57.100167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:50.474 [2024-11-19 00:11:57.100176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.100215] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:50.474 [2024-11-19 00:11:57.100227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.100236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:50.474 [2024-11-19 00:11:57.100244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:50.474 [2024-11-19 00:11:57.100253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.125792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.126013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:50.474 [2024-11-19 00:11:57.126038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.516 ms 00:27:50.474 [2024-11-19 00:11:57.126047] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.126459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.474 [2024-11-19 00:11:57.126498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:50.474 [2024-11-19 00:11:57.126511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:50.474 [2024-11-19 00:11:57.126521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.474 [2024-11-19 00:11:57.127992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4401.131 ms, result 0 00:27:50.474 [2024-11-19 00:11:57.142709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.474 [2024-11-19 00:11:57.158726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:50.736 [2024-11-19 00:11:57.166895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:51.307 [2024-11-19 00:11:57.935653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.307 [2024-11-19 00:11:57.935715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:51.307 [2024-11-19 00:11:57.935732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:51.307 [2024-11-19 00:11:57.935745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.307 [2024-11-19 00:11:57.935770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.307 [2024-11-19 00:11:57.935780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:51.307 [2024-11-19 00:11:57.935789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.307 [2024-11-19 00:11:57.935797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.307 [2024-11-19 00:11:57.935819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.307 [2024-11-19 00:11:57.935828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:51.307 [2024-11-19 00:11:57.935837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.307 [2024-11-19 00:11:57.935845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.307 [2024-11-19 00:11:57.935910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.246 ms, result 0 00:27:51.307 true 00:27:51.307 00:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.569 { 00:27:51.569 "name": "ftl", 00:27:51.569 "properties": [ 00:27:51.569 { 00:27:51.569 "name": "superblock_version", 00:27:51.569 "value": 5, 00:27:51.569 "read-only": true 00:27:51.569 }, 
00:27:51.569 { 00:27:51.569 "name": "base_device", 00:27:51.569 "bands": [ 00:27:51.569 { 00:27:51.569 "id": 0, 00:27:51.569 "state": "CLOSED", 00:27:51.569 "validity": 1.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 1, 00:27:51.569 "state": "CLOSED", 00:27:51.569 "validity": 1.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 2, 00:27:51.569 "state": "CLOSED", 00:27:51.569 "validity": 0.007843137254901933 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 3, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 4, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 5, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 6, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 7, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 8, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 9, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 10, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 11, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 12, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 13, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 14, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 15, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 16, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 17, 00:27:51.569 "state": "FREE", 00:27:51.569 "validity": 0.0 00:27:51.569 } 00:27:51.569 ], 00:27:51.569 "read-only": true 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "name": "cache_device", 00:27:51.569 "type": "bdev", 00:27:51.569 "chunks": [ 00:27:51.569 { 00:27:51.569 "id": 0, 00:27:51.569 "state": "INACTIVE", 00:27:51.569 "utilization": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 1, 00:27:51.569 "state": "OPEN", 00:27:51.569 "utilization": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 2, 00:27:51.569 "state": "OPEN", 00:27:51.569 "utilization": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 3, 00:27:51.569 "state": "FREE", 00:27:51.569 "utilization": 0.0 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "id": 4, 00:27:51.569 "state": "FREE", 00:27:51.569 "utilization": 0.0 00:27:51.569 } 00:27:51.569 ], 00:27:51.569 "read-only": true 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "name": "verbose_mode", 00:27:51.569 "value": true, 00:27:51.569 "unit": "", 00:27:51.569 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:51.569 }, 00:27:51.569 { 00:27:51.569 "name": "prep_upgrade_on_shutdown", 00:27:51.569 "value": false, 00:27:51.569 "unit": "", 00:27:51.569 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:51.569 } 00:27:51.569 ] 00:27:51.569 } 00:27:51.569 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:51.569 00:11:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:51.569 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:51.831 Validate MD5 checksum, iteration 1 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:51.831 00:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.092 [2024-11-19 00:11:58.555752] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
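The xtrace above is the checksum pass driving this transfer: each iteration pulls a 1 GiB slice (1024 blocks of 1048576 bytes) of the ftln1 initiator bdev into a scratch file over NVMe/TCP via spdk_dd, md5sums it, and advances --skip by 1024 so the next iteration reads the following gigabyte. A minimal sketch of that loop, reconstructed only from the commands traced in this log (the tcp_dd wrapping and file paths are as shown above; the rootdir, testfile, and sums names are illustrative stand-ins, and the expected checksums were recorded earlier when the device was filled):

    # Sketch of the validate loop as traced above (not the verbatim test source).
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd: spdk_dd reads the NVMe/TCP-attached FTL bdev into a flat file
        "$rootdir"/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir"/test/ftl/config/ini.json \
            --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Any mismatch against the recorded checksum for this slice fails the test
        [[ $sum == "${sums[i]}" ]] || exit 1
    done

In the traced run the comparison appears as [[ 701086096a78d97b96de8f675a1e87b5 != \7\0\1\0... ]] because bash escapes every character of the expanded expected value inside [[ ]] so it is matched literally rather than as a glob pattern.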
00:27:52.092 [2024-11-19 00:11:58.556031] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80818 ] 00:27:52.092 [2024-11-19 00:11:58.716609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.353 [2024-11-19 00:11:58.835014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.825  [2024-11-19T00:12:01.463Z] Copying: 547/1024 [MB] (547 MBps) [2024-11-19T00:12:02.407Z] Copying: 1024/1024 [MB] (average 575 MBps) 00:27:55.715 00:27:55.715 00:12:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:55.715 00:12:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=701086096a78d97b96de8f675a1e87b5 00:27:58.263 Validate MD5 checksum, iteration 2 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 701086096a78d97b96de8f675a1e87b5 != \7\0\1\0\8\6\0\9\6\a\7\8\d\9\7\b\9\6\d\e\8\f\6\7\5\a\1\e\8\7\b\5 ]] 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:58.263 00:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.263 [2024-11-19 00:12:04.527850] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:58.263 [2024-11-19 00:12:04.528107] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80886 ] 00:27:58.263 [2024-11-19 00:12:04.682851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.263 [2024-11-19 00:12:04.758713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.648  [2024-11-19T00:12:06.910Z] Copying: 683/1024 [MB] (683 MBps) [2024-11-19T00:12:07.478Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:28:00.786 00:28:00.786 00:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:00.786 00:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3d2c823ead611e844b69cea48edc0d6d 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3d2c823ead611e844b69cea48edc0d6d != \3\d\2\c\8\2\3\e\a\d\6\1\1\e\8\4\4\b\6\9\c\e\a\4\8\e\d\c\0\d\6\d ]] 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80725 ]] 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80725 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80937 00:28:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80937 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80937 ']' 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
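What follows is the dirty-shutdown half of the test: kill -9 gives FTL no chance to shut down cleanly (and the superblock was deliberately marked dirty earlier via 'Set FTL dirty state'), so the restarted target at pid 80937 must go through recovery — reload state from shared memory ('SHM: clean 0, shm_clean 0'), recover band state, preprocess and restore the P2L checkpoints, and rebuild the open NV-cache chunks — before the same two MD5 checks are repeated. A sketch of that pattern using only commands visible in this trace (the spdk_tgt_pid variable naming is illustrative):

    # Dirty shutdown and restart, as traced above (illustrative sketch).
    kill -9 "$spdk_tgt_pid"          # no RPC shutdown: FTL cannot persist its state
    unset spdk_tgt_pid
    # Relaunch the target from the same saved config; FTL comes up dirty and recovers
    "$rootdir"/build/bin/spdk_tgt '--cpumask=[0]' \
        --config="$rootdir"/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # block until the RPC socket answers

The pass criterion is that both slice checksums after recovery (701086... and 3d2c82... again, further below) match the pre-kill values, i.e. the data written before the unclean restart survived intact.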
00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.337 00:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.337 [2024-11-19 00:12:09.547665] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:03.337 [2024-11-19 00:12:09.547784] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80937 ] 00:28:03.337 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80725 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:03.337 [2024-11-19 00:12:09.703356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.337 [2024-11-19 00:12:09.786547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.910 [2024-11-19 00:12:10.350892] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.910 [2024-11-19 00:12:10.351046] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.910 [2024-11-19 00:12:10.493714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.493826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:03.910 [2024-11-19 00:12:10.493842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.910 [2024-11-19 00:12:10.493849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.493896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.493904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:03.910 [2024-11-19 00:12:10.493910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:03.910 [2024-11-19 00:12:10.493916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.493936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:03.910 [2024-11-19 00:12:10.494463] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:03.910 [2024-11-19 00:12:10.494479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.494485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:03.910 [2024-11-19 00:12:10.494492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:28:03.910 [2024-11-19 00:12:10.494497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.494722] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:03.910 [2024-11-19 00:12:10.507158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.507185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:03.910 [2024-11-19 00:12:10.507194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.436 ms 00:28:03.910 [2024-11-19 00:12:10.507201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.513988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:03.910 [2024-11-19 00:12:10.514013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:03.910 [2024-11-19 00:12:10.514023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:03.910 [2024-11-19 00:12:10.514029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.514280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.514289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:03.910 [2024-11-19 00:12:10.514295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:28:03.910 [2024-11-19 00:12:10.514300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.514337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.514345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:03.910 [2024-11-19 00:12:10.514351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:03.910 [2024-11-19 00:12:10.514357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.514377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.514383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:03.910 [2024-11-19 00:12:10.514389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:03.910 [2024-11-19 00:12:10.514395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.514409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:03.910 [2024-11-19 00:12:10.516633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.516746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:03.910 [2024-11-19 00:12:10.516758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.227 ms 00:28:03.910 [2024-11-19 00:12:10.516764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.516788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.910 [2024-11-19 00:12:10.516794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:03.910 [2024-11-19 00:12:10.516800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:03.910 [2024-11-19 00:12:10.516806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.910 [2024-11-19 00:12:10.516822] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:03.910 [2024-11-19 00:12:10.516837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:03.910 [2024-11-19 00:12:10.516864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:03.910 [2024-11-19 00:12:10.516877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:03.910 [2024-11-19 00:12:10.516955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:03.910 [2024-11-19 00:12:10.516963] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:03.910 [2024-11-19 00:12:10.516971] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:03.910 [2024-11-19 00:12:10.516978] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:03.910 [2024-11-19 00:12:10.516984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:03.911 [2024-11-19 00:12:10.516990] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:03.911 [2024-11-19 00:12:10.516996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:03.911 [2024-11-19 00:12:10.517001] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:03.911 [2024-11-19 00:12:10.517006] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:03.911 [2024-11-19 00:12:10.517011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.911 [2024-11-19 00:12:10.517018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:03.911 [2024-11-19 00:12:10.517024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:28:03.911 [2024-11-19 00:12:10.517029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.911 [2024-11-19 00:12:10.517093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.911 [2024-11-19 00:12:10.517099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:03.911 [2024-11-19 00:12:10.517105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:03.911 [2024-11-19 00:12:10.517110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.911 [2024-11-19 00:12:10.517194] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:03.911 [2024-11-19 00:12:10.517202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:03.911 [2024-11-19 00:12:10.517210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:03.911 [2024-11-19 00:12:10.517227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:03.911 [2024-11-19 00:12:10.517238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:03.911 [2024-11-19 00:12:10.517244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:03.911 [2024-11-19 00:12:10.517249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:03.911 [2024-11-19 00:12:10.517259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:03.911 [2024-11-19 00:12:10.517264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:03.911 [2024-11-19 00:12:10.517274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:03.911 [2024-11-19 00:12:10.517279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:03.911 [2024-11-19 00:12:10.517289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:03.911 [2024-11-19 00:12:10.517294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:03.911 [2024-11-19 00:12:10.517304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:03.911 [2024-11-19 00:12:10.517322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:03.911 [2024-11-19 00:12:10.517337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:03.911 [2024-11-19 00:12:10.517352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:03.911 [2024-11-19 00:12:10.517365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:03.911 [2024-11-19 00:12:10.517381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:03.911 [2024-11-19 00:12:10.517395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:03.911 [2024-11-19 00:12:10.517410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:03.911 [2024-11-19 00:12:10.517415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517420] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:03.911 [2024-11-19 00:12:10.517426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:03.911 [2024-11-19 00:12:10.517432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:03.911 [2024-11-19 00:12:10.517443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:03.911 [2024-11-19 00:12:10.517448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:03.911 [2024-11-19 00:12:10.517453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:03.911 [2024-11-19 00:12:10.517458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:03.911 [2024-11-19 00:12:10.517463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:03.911 [2024-11-19 00:12:10.517467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:03.911 [2024-11-19 00:12:10.517473] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:03.911 [2024-11-19 00:12:10.517480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:03.911 [2024-11-19 00:12:10.517492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:03.911 [2024-11-19 00:12:10.517507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:03.911 [2024-11-19 00:12:10.517512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:03.911 [2024-11-19 00:12:10.517517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:03.911 [2024-11-19 00:12:10.517522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:03.911 [2024-11-19 00:12:10.517559] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:03.911 [2024-11-19 00:12:10.517565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.911 [2024-11-19 00:12:10.517578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:03.911 [2024-11-19 00:12:10.517583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:03.911 [2024-11-19 00:12:10.517588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:03.911 [2024-11-19 00:12:10.517594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.911 [2024-11-19 00:12:10.517601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:03.911 [2024-11-19 00:12:10.517606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:28:03.911 [2024-11-19 00:12:10.517611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.911 [2024-11-19 00:12:10.536610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.911 [2024-11-19 00:12:10.536636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:03.911 [2024-11-19 00:12:10.536644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.959 ms 00:28:03.911 [2024-11-19 00:12:10.536650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.911 [2024-11-19 00:12:10.536679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.911 [2024-11-19 00:12:10.536686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:03.912 [2024-11-19 00:12:10.536692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:03.912 [2024-11-19 00:12:10.536698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.560539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.560566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:03.912 [2024-11-19 00:12:10.560573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.799 ms 00:28:03.912 [2024-11-19 00:12:10.560579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.560600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.560607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:03.912 [2024-11-19 00:12:10.560613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:03.912 [2024-11-19 00:12:10.560619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.560689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.560697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:03.912 [2024-11-19 00:12:10.560703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:03.912 [2024-11-19 00:12:10.560709] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.560737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.560744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:03.912 [2024-11-19 00:12:10.560750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:03.912 [2024-11-19 00:12:10.560756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.572001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.572105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:03.912 [2024-11-19 00:12:10.572117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.230 ms 00:28:03.912 [2024-11-19 00:12:10.572138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.912 [2024-11-19 00:12:10.572215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.912 [2024-11-19 00:12:10.572223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:03.912 [2024-11-19 00:12:10.572230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:03.912 [2024-11-19 00:12:10.572235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.600483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.600524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:04.173 [2024-11-19 00:12:10.600539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.233 ms 00:28:04.173 [2024-11-19 00:12:10.600548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.609487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.609513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:04.173 [2024-11-19 00:12:10.609525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:28:04.173 [2024-11-19 00:12:10.609531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.652825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.652860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:04.173 [2024-11-19 00:12:10.652874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.252 ms 00:28:04.173 [2024-11-19 00:12:10.652880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.652984] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:04.173 [2024-11-19 00:12:10.653058] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:04.173 [2024-11-19 00:12:10.653143] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:04.173 [2024-11-19 00:12:10.653212] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:04.173 [2024-11-19 00:12:10.653219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.653225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:04.173 [2024-11-19 
00:12:10.653232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:28:04.173 [2024-11-19 00:12:10.653239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.653280] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:04.173 [2024-11-19 00:12:10.653289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.653298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:04.173 [2024-11-19 00:12:10.653305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:04.173 [2024-11-19 00:12:10.653310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.664851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.664879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:04.173 [2024-11-19 00:12:10.664888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.524 ms 00:28:04.173 [2024-11-19 00:12:10.664894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.671194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.173 [2024-11-19 00:12:10.671219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:04.173 [2024-11-19 00:12:10.671227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:04.173 [2024-11-19 00:12:10.671233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.173 [2024-11-19 00:12:10.671292] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:04.173 [2024-11-19 00:12:10.671399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.174 [2024-11-19 00:12:10.671408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:04.174 [2024-11-19 00:12:10.671415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.107 ms 00:28:04.174 [2024-11-19 00:12:10.671421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.117 [2024-11-19 00:12:11.436631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.118 [2024-11-19 00:12:11.436701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:05.118 [2024-11-19 00:12:11.436715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 764.555 ms 00:28:05.118 [2024-11-19 00:12:11.436724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.118 [2024-11-19 00:12:11.441048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.118 [2024-11-19 00:12:11.441083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:05.118 [2024-11-19 00:12:11.441093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.285 ms 00:28:05.118 [2024-11-19 00:12:11.441101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.118 [2024-11-19 00:12:11.442032] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:05.118 [2024-11-19 00:12:11.442066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.118 [2024-11-19 00:12:11.442074] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:05.118 [2024-11-19 00:12:11.442083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:28:05.118 [2024-11-19 00:12:11.442091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.118 [2024-11-19 00:12:11.442249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.118 [2024-11-19 00:12:11.442266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:05.118 [2024-11-19 00:12:11.442275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms 00:28:05.118 [2024-11-19 00:12:11.442282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.118 [2024-11-19 00:12:11.442341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 771.027 ms, result 0 00:28:05.118 [2024-11-19 00:12:11.442379] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:05.118 [2024-11-19 00:12:11.442448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.118 [2024-11-19 00:12:11.442458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:05.118 [2024-11-19 00:12:11.442465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:05.118 [2024-11-19 00:12:11.442473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.434617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.434705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:06.061 [2024-11-19 00:12:12.434722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 991.231 ms 00:28:06.061 [2024-11-19 00:12:12.434731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.439244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.439314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:06.061 [2024-11-19 00:12:12.439326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.353 ms 00:28:06.061 [2024-11-19 00:12:12.439335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.439754] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:06.061 [2024-11-19 00:12:12.439790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.439801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:06.061 [2024-11-19 00:12:12.439810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:28:06.061 [2024-11-19 00:12:12.439818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.440237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.440295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:06.061 [2024-11-19 00:12:12.440309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:06.061 [2024-11-19 00:12:12.440317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 
00:12:12.440375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 997.981 ms, result 0 00:28:06.061 [2024-11-19 00:12:12.440424] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.061 [2024-11-19 00:12:12.440437] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:06.061 [2024-11-19 00:12:12.440448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.440458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:06.061 [2024-11-19 00:12:12.440468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1769.164 ms 00:28:06.061 [2024-11-19 00:12:12.440476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.440506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.440515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:06.061 [2024-11-19 00:12:12.440529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.061 [2024-11-19 00:12:12.440538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.452860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:06.061 [2024-11-19 00:12:12.452995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.453006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:06.061 [2024-11-19 00:12:12.453017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.440 ms 00:28:06.061 [2024-11-19 00:12:12.453025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.453765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.453792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:06.061 [2024-11-19 00:12:12.453805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.654 ms 00:28:06.061 [2024-11-19 00:12:12.453814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:06.061 [2024-11-19 00:12:12.456151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.273 ms 00:28:06.061 [2024-11-19 00:12:12.456158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:06.061 [2024-11-19 00:12:12.456218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.061 [2024-11-19 00:12:12.456231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:06.061 
[2024-11-19 00:12:12.456361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:06.061 [2024-11-19 00:12:12.456369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:06.061 [2024-11-19 00:12:12.456407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:06.061 [2024-11-19 00:12:12.456414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456444] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:06.061 [2024-11-19 00:12:12.456456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:06.061 [2024-11-19 00:12:12.456474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:06.061 [2024-11-19 00:12:12.456482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.456538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.061 [2024-11-19 00:12:12.456548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:06.061 [2024-11-19 00:12:12.456555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:06.061 [2024-11-19 00:12:12.456563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.061 [2024-11-19 00:12:12.457908] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1963.686 ms, result 0 00:28:06.061 [2024-11-19 00:12:12.473530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.061 [2024-11-19 00:12:12.489522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:06.061 [2024-11-19 00:12:12.499024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:06.061 Validate MD5 checksum, iteration 1 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:06.061 00:12:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:06.061 00:12:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.061 [2024-11-19 00:12:12.618870] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:06.061 [2024-11-19 00:12:12.619013] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80973 ] 00:28:06.323 [2024-11-19 00:12:12.782255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.323 [2024-11-19 00:12:12.893269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.226  [2024-11-19T00:12:15.178Z] Copying: 669/1024 [MB] (669 MBps) [2024-11-19T00:12:19.382Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:28:12.690 00:28:12.690 00:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:12.690 00:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:15.235 Validate MD5 checksum, iteration 2 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=701086096a78d97b96de8f675a1e87b5 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 701086096a78d97b96de8f675a1e87b5 != \7\0\1\0\8\6\0\9\6\a\7\8\d\9\7\b\9\6\d\e\8\f\6\7\5\a\1\e\8\7\b\5 ]] 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:15.235 00:12:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.235 [2024-11-19 00:12:21.376170] Starting SPDK v25.01-pre git sha1 
d47eb51c9 / DPDK 24.03.0 initialization... 00:28:15.235 [2024-11-19 00:12:21.376285] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81071 ] 00:28:15.235 [2024-11-19 00:12:21.529203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.235 [2024-11-19 00:12:21.615832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.611  [2024-11-19T00:12:23.871Z] Copying: 580/1024 [MB] (580 MBps) [2024-11-19T00:12:26.415Z] Copying: 1024/1024 [MB] (average 583 MBps) 00:28:19.723 00:28:19.723 00:12:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:19.723 00:12:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3d2c823ead611e844b69cea48edc0d6d 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3d2c823ead611e844b69cea48edc0d6d != \3\d\2\c\8\2\3\e\a\d\6\1\1\e\8\4\4\b\6\9\c\e\a\4\8\e\d\c\0\d\6\d ]] 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:21.637 00:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80937 ]] 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80937 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80937 ']' 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80937 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80937 00:28:21.637 killing process with pid 80937 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80937' 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 80937 00:28:21.637 00:12:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80937 00:28:21.898 [2024-11-19 00:12:28.557855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:21.898 [2024-11-19 00:12:28.568401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.568435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:21.898 [2024-11-19 00:12:28.568446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:21.898 [2024-11-19 00:12:28.568453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.568469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:21.898 [2024-11-19 00:12:28.570613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.570638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:21.898 [2024-11-19 00:12:28.570646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.133 ms 00:28:21.898 [2024-11-19 00:12:28.570656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.570830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.570839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:21.898 [2024-11-19 00:12:28.570845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:28:21.898 [2024-11-19 00:12:28.570851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.572258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.572283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:21.898 [2024-11-19 00:12:28.572290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.396 ms 00:28:21.898 [2024-11-19 00:12:28.572296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.573172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.573190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:21.898 [2024-11-19 00:12:28.573197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 00:28:21.898 [2024-11-19 00:12:28.573204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.581215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.581241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:21.898 [2024-11-19 00:12:28.581249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.986 ms 00:28:21.898 [2024-11-19 00:12:28.581258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.585635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.585666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:21.898 [2024-11-19 00:12:28.585675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.351 ms 00:28:21.898 [2024-11-19 00:12:28.585681] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:21.898 [2024-11-19 00:12:28.585736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.898 [2024-11-19 00:12:28.585744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:21.898 [2024-11-19 00:12:28.585750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:21.898 [2024-11-19 00:12:28.585756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.159 [2024-11-19 00:12:28.593860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.159 [2024-11-19 00:12:28.593885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:22.159 [2024-11-19 00:12:28.593892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.088 ms 00:28:22.159 [2024-11-19 00:12:28.593898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.159 [2024-11-19 00:12:28.601744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.159 [2024-11-19 00:12:28.601768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:22.159 [2024-11-19 00:12:28.601775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.822 ms 00:28:22.159 [2024-11-19 00:12:28.601781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.159 [2024-11-19 00:12:28.609315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.159 [2024-11-19 00:12:28.609340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:22.159 [2024-11-19 00:12:28.609347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.511 ms 00:28:22.159 [2024-11-19 00:12:28.609352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.159 [2024-11-19 00:12:28.616887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.159 [2024-11-19 00:12:28.616912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:22.159 [2024-11-19 00:12:28.616919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.493 ms 00:28:22.159 [2024-11-19 00:12:28.616924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.159 [2024-11-19 00:12:28.616948] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:22.159 [2024-11-19 00:12:28.616958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:22.159 [2024-11-19 00:12:28.616965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:22.159 [2024-11-19 00:12:28.616972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:22.159 [2024-11-19 00:12:28.616978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.616984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.616989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.616995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 
[2024-11-19 00:12:28.617006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:22.159 [2024-11-19 00:12:28.617064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:22.159 [2024-11-19 00:12:28.617071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2dcf8c8e-0d92-49dc-966e-87c5c8f7f5be 00:28:22.159 [2024-11-19 00:12:28.617077] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:22.159 [2024-11-19 00:12:28.617083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:22.159 [2024-11-19 00:12:28.617089] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:22.159 [2024-11-19 00:12:28.617094] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:22.159 [2024-11-19 00:12:28.617100] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:22.159 [2024-11-19 00:12:28.617105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:22.159 [2024-11-19 00:12:28.617110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:22.159 [2024-11-19 00:12:28.617115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:22.159 [2024-11-19 00:12:28.617128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:22.159 [2024-11-19 00:12:28.617134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.159 [2024-11-19 00:12:28.617143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:22.159 [2024-11-19 00:12:28.617150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:28:22.159 [2024-11-19 00:12:28.617156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.626892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.160 [2024-11-19 00:12:28.626916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:22.160 [2024-11-19 00:12:28.626923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.724 ms 00:28:22.160 [2024-11-19 00:12:28.626929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:22.160 [2024-11-19 00:12:28.627205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.160 [2024-11-19 00:12:28.627219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:22.160 [2024-11-19 00:12:28.627225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:28:22.160 [2024-11-19 00:12:28.627231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.660056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.660083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:22.160 [2024-11-19 00:12:28.660091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.660098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.660135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.660143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:22.160 [2024-11-19 00:12:28.660149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.660155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.660212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.660220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:22.160 [2024-11-19 00:12:28.660227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.660233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.660245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.660253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:22.160 [2024-11-19 00:12:28.660259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.660268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.719530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.719561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:22.160 [2024-11-19 00:12:28.719570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.719577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:22.160 [2024-11-19 00:12:28.768176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:22.160 [2024-11-19 00:12:28.768246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768251] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:22.160 [2024-11-19 00:12:28.768309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:22.160 [2024-11-19 00:12:28.768403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:22.160 [2024-11-19 00:12:28.768447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:22.160 [2024-11-19 00:12:28.768495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.160 [2024-11-19 00:12:28.768539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:22.160 [2024-11-19 00:12:28.768547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.160 [2024-11-19 00:12:28.768553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.160 [2024-11-19 00:12:28.768638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 200.216 ms, result 0 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.735 Remove shared memory files 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:22.735 00:12:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80725 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:22.735 00:28:22.735 real 1m24.886s 00:28:22.735 user 1m55.623s 00:28:22.735 sys 0m19.585s 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.735 ************************************ 00:28:22.735 END TEST ftl_upgrade_shutdown 00:28:22.735 ************************************ 00:28:22.735 00:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.995 00:12:29 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:22.995 00:12:29 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:22.995 00:12:29 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:22.995 00:12:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.995 00:12:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:22.995 ************************************ 00:28:22.995 START TEST ftl_restore_fast 00:28:22.995 ************************************ 00:28:22.995 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:22.995 * Looking for test storage... 00:28:22.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.995 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:22.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.996 --rc genhtml_branch_coverage=1 00:28:22.996 --rc genhtml_function_coverage=1 00:28:22.996 --rc genhtml_legend=1 00:28:22.996 --rc geninfo_all_blocks=1 00:28:22.996 --rc geninfo_unexecuted_blocks=1 00:28:22.996 00:28:22.996 ' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:22.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.996 --rc genhtml_branch_coverage=1 00:28:22.996 --rc genhtml_function_coverage=1 00:28:22.996 --rc genhtml_legend=1 00:28:22.996 --rc geninfo_all_blocks=1 00:28:22.996 --rc geninfo_unexecuted_blocks=1 00:28:22.996 00:28:22.996 ' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:22.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.996 --rc genhtml_branch_coverage=1 00:28:22.996 --rc genhtml_function_coverage=1 00:28:22.996 --rc genhtml_legend=1 00:28:22.996 --rc geninfo_all_blocks=1 00:28:22.996 --rc geninfo_unexecuted_blocks=1 00:28:22.996 00:28:22.996 ' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:22.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.996 --rc genhtml_branch_coverage=1 00:28:22.996 --rc genhtml_function_coverage=1 00:28:22.996 --rc genhtml_legend=1 00:28:22.996 --rc geninfo_all_blocks=1 00:28:22.996 --rc geninfo_unexecuted_blocks=1 00:28:22.996 00:28:22.996 ' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.j1YwVHsjnB 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:22.996 00:12:29 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81236 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81236 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81236 ']' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.996 00:12:29 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:23.258 [2024-11-19 00:12:29.724046] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:23.258 [2024-11-19 00:12:29.724189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81236 ] 00:28:23.258 [2024-11-19 00:12:29.880594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.519 [2024-11-19 00:12:29.957829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:24.092 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:24.354 00:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:24.354 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:24.354 { 00:28:24.354 "name": "nvme0n1", 00:28:24.354 "aliases": [ 00:28:24.354 "67600a13-1d8a-4cfe-94f6-9dd2e7b46ef6" 00:28:24.354 ], 00:28:24.354 "product_name": "NVMe disk", 00:28:24.354 "block_size": 4096, 00:28:24.354 "num_blocks": 1310720, 00:28:24.354 "uuid": "67600a13-1d8a-4cfe-94f6-9dd2e7b46ef6", 00:28:24.354 "numa_id": -1, 00:28:24.354 "assigned_rate_limits": { 00:28:24.354 "rw_ios_per_sec": 0, 00:28:24.354 "rw_mbytes_per_sec": 0, 00:28:24.354 "r_mbytes_per_sec": 0, 00:28:24.354 "w_mbytes_per_sec": 0 00:28:24.354 }, 00:28:24.354 "claimed": true, 00:28:24.354 "claim_type": "read_many_write_one", 00:28:24.354 "zoned": false, 00:28:24.354 "supported_io_types": { 00:28:24.354 "read": true, 00:28:24.354 "write": true, 00:28:24.354 "unmap": true, 00:28:24.354 "flush": true, 00:28:24.354 "reset": true, 00:28:24.354 "nvme_admin": true, 00:28:24.354 "nvme_io": true, 00:28:24.354 "nvme_io_md": false, 00:28:24.354 "write_zeroes": true, 00:28:24.354 "zcopy": false, 00:28:24.354 "get_zone_info": false, 00:28:24.354 "zone_management": false, 00:28:24.354 "zone_append": false, 00:28:24.354 "compare": true, 00:28:24.354 "compare_and_write": false, 00:28:24.354 "abort": true, 00:28:24.354 "seek_hole": false, 00:28:24.354 "seek_data": false, 00:28:24.354 "copy": true, 00:28:24.354 "nvme_iov_md": false 00:28:24.354 }, 00:28:24.354 "driver_specific": { 00:28:24.354 "nvme": [ 00:28:24.354 { 00:28:24.354 "pci_address": "0000:00:11.0", 00:28:24.354 "trid": { 00:28:24.354 "trtype": "PCIe", 00:28:24.354 "traddr": "0000:00:11.0" 00:28:24.354 }, 00:28:24.354 "ctrlr_data": { 00:28:24.354 "cntlid": 0, 00:28:24.354 "vendor_id": "0x1b36", 00:28:24.354 "model_number": "QEMU NVMe Ctrl", 00:28:24.354 "serial_number": "12341", 00:28:24.354 "firmware_revision": "8.0.0", 00:28:24.354 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:24.354 "oacs": { 00:28:24.354 "security": 0, 00:28:24.354 "format": 1, 00:28:24.354 "firmware": 0, 00:28:24.354 "ns_manage": 1 00:28:24.354 }, 00:28:24.354 "multi_ctrlr": false, 00:28:24.354 "ana_reporting": false 00:28:24.354 }, 00:28:24.354 "vs": { 00:28:24.354 "nvme_version": "1.4" 00:28:24.354 }, 00:28:24.354 "ns_data": { 00:28:24.354 "id": 1, 00:28:24.354 "can_share": false 00:28:24.354 } 00:28:24.354 } 00:28:24.354 ], 00:28:24.354 "mp_policy": "active_passive" 00:28:24.354 } 00:28:24.354 } 00:28:24.354 ]' 00:28:24.354 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:24.354 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:24.354 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=29e8e9e7-ffa9-4da1-9853-891d98ec08e1 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:24.615 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29e8e9e7-ffa9-4da1-9853-891d98ec08e1 00:28:24.876 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:25.138 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=d723bf6c-6a2e-46e7-82f0-82d268e1ba1e 00:28:25.138 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d723bf6c-6a2e-46e7-82f0-82d268e1ba1e 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:25.399 00:12:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.660 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:25.660 { 00:28:25.660 "name": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:25.660 "aliases": [ 00:28:25.660 "lvs/nvme0n1p0" 00:28:25.660 ], 00:28:25.660 "product_name": "Logical Volume", 00:28:25.660 "block_size": 4096, 00:28:25.660 "num_blocks": 26476544, 00:28:25.660 "uuid": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:25.660 "assigned_rate_limits": { 00:28:25.660 "rw_ios_per_sec": 0, 00:28:25.660 "rw_mbytes_per_sec": 0, 00:28:25.660 "r_mbytes_per_sec": 0, 00:28:25.660 "w_mbytes_per_sec": 0 00:28:25.660 }, 00:28:25.660 "claimed": false, 00:28:25.660 "zoned": false, 00:28:25.660 "supported_io_types": { 00:28:25.660 "read": true, 00:28:25.660 "write": true, 00:28:25.660 "unmap": true, 00:28:25.660 "flush": false, 00:28:25.660 "reset": true, 00:28:25.660 "nvme_admin": false, 00:28:25.661 "nvme_io": false, 00:28:25.661 "nvme_io_md": false, 00:28:25.661 "write_zeroes": true, 00:28:25.661 "zcopy": false, 00:28:25.661 "get_zone_info": false, 00:28:25.661 "zone_management": false, 00:28:25.661 
"zone_append": false, 00:28:25.661 "compare": false, 00:28:25.661 "compare_and_write": false, 00:28:25.661 "abort": false, 00:28:25.661 "seek_hole": true, 00:28:25.661 "seek_data": true, 00:28:25.661 "copy": false, 00:28:25.661 "nvme_iov_md": false 00:28:25.661 }, 00:28:25.661 "driver_specific": { 00:28:25.661 "lvol": { 00:28:25.661 "lvol_store_uuid": "d723bf6c-6a2e-46e7-82f0-82d268e1ba1e", 00:28:25.661 "base_bdev": "nvme0n1", 00:28:25.661 "thin_provision": true, 00:28:25.661 "num_allocated_clusters": 0, 00:28:25.661 "snapshot": false, 00:28:25.661 "clone": false, 00:28:25.661 "esnap_clone": false 00:28:25.661 } 00:28:25.661 } 00:28:25.661 } 00:28:25.661 ]' 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:25.661 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:25.923 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.185 { 00:28:26.185 "name": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:26.185 "aliases": [ 00:28:26.185 "lvs/nvme0n1p0" 00:28:26.185 ], 00:28:26.185 "product_name": "Logical Volume", 00:28:26.185 "block_size": 4096, 00:28:26.185 "num_blocks": 26476544, 00:28:26.185 "uuid": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:26.185 "assigned_rate_limits": { 00:28:26.185 "rw_ios_per_sec": 0, 00:28:26.185 "rw_mbytes_per_sec": 0, 00:28:26.185 "r_mbytes_per_sec": 0, 00:28:26.185 "w_mbytes_per_sec": 0 00:28:26.185 }, 00:28:26.185 "claimed": false, 00:28:26.185 "zoned": false, 00:28:26.185 "supported_io_types": { 00:28:26.185 "read": true, 00:28:26.185 "write": true, 00:28:26.185 "unmap": true, 00:28:26.185 "flush": false, 00:28:26.185 "reset": true, 00:28:26.185 "nvme_admin": false, 00:28:26.185 "nvme_io": false, 00:28:26.185 "nvme_io_md": false, 00:28:26.185 "write_zeroes": true, 00:28:26.185 "zcopy": false, 00:28:26.185 "get_zone_info": false, 00:28:26.185 
"zone_management": false, 00:28:26.185 "zone_append": false, 00:28:26.185 "compare": false, 00:28:26.185 "compare_and_write": false, 00:28:26.185 "abort": false, 00:28:26.185 "seek_hole": true, 00:28:26.185 "seek_data": true, 00:28:26.185 "copy": false, 00:28:26.185 "nvme_iov_md": false 00:28:26.185 }, 00:28:26.185 "driver_specific": { 00:28:26.185 "lvol": { 00:28:26.185 "lvol_store_uuid": "d723bf6c-6a2e-46e7-82f0-82d268e1ba1e", 00:28:26.185 "base_bdev": "nvme0n1", 00:28:26.185 "thin_provision": true, 00:28:26.185 "num_allocated_clusters": 0, 00:28:26.185 "snapshot": false, 00:28:26.185 "clone": false, 00:28:26.185 "esnap_clone": false 00:28:26.185 } 00:28:26.185 } 00:28:26.185 } 00:28:26.185 ]' 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:26.185 00:12:32 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:26.447 00:12:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8517cfbd-16a7-41e6-a01b-314be4d9da16 00:28:26.447 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.447 { 00:28:26.447 "name": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:26.447 "aliases": [ 00:28:26.447 "lvs/nvme0n1p0" 00:28:26.447 ], 00:28:26.447 "product_name": "Logical Volume", 00:28:26.447 "block_size": 4096, 00:28:26.447 "num_blocks": 26476544, 00:28:26.447 "uuid": "8517cfbd-16a7-41e6-a01b-314be4d9da16", 00:28:26.447 "assigned_rate_limits": { 00:28:26.447 "rw_ios_per_sec": 0, 00:28:26.447 "rw_mbytes_per_sec": 0, 00:28:26.447 "r_mbytes_per_sec": 0, 00:28:26.447 "w_mbytes_per_sec": 0 00:28:26.447 }, 00:28:26.447 "claimed": false, 00:28:26.447 "zoned": false, 00:28:26.447 "supported_io_types": { 00:28:26.447 "read": true, 00:28:26.447 "write": true, 00:28:26.447 "unmap": true, 00:28:26.447 "flush": false, 00:28:26.447 "reset": true, 00:28:26.447 "nvme_admin": false, 00:28:26.447 "nvme_io": false, 00:28:26.447 "nvme_io_md": false, 00:28:26.447 "write_zeroes": true, 00:28:26.447 "zcopy": false, 00:28:26.447 "get_zone_info": false, 00:28:26.447 "zone_management": false, 00:28:26.447 "zone_append": false, 00:28:26.447 "compare": false, 00:28:26.447 "compare_and_write": false, 00:28:26.447 "abort": false, 
00:28:26.447 "seek_hole": true, 00:28:26.447 "seek_data": true, 00:28:26.447 "copy": false, 00:28:26.447 "nvme_iov_md": false 00:28:26.447 }, 00:28:26.447 "driver_specific": { 00:28:26.447 "lvol": { 00:28:26.447 "lvol_store_uuid": "d723bf6c-6a2e-46e7-82f0-82d268e1ba1e", 00:28:26.447 "base_bdev": "nvme0n1", 00:28:26.447 "thin_provision": true, 00:28:26.447 "num_allocated_clusters": 0, 00:28:26.447 "snapshot": false, 00:28:26.447 "clone": false, 00:28:26.447 "esnap_clone": false 00:28:26.447 } 00:28:26.447 } 00:28:26.447 } 00:28:26.447 ]' 00:28:26.447 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8517cfbd-16a7-41e6-a01b-314be4d9da16 --l2p_dram_limit 10' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:26.713 00:12:33 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8517cfbd-16a7-41e6-a01b-314be4d9da16 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:26.713 [2024-11-19 00:12:33.372565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.713 [2024-11-19 00:12:33.372617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:26.713 [2024-11-19 00:12:33.372631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:26.714 [2024-11-19 00:12:33.372639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.372692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.372701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.714 [2024-11-19 00:12:33.372710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:26.714 [2024-11-19 00:12:33.372716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.372737] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:26.714 [2024-11-19 00:12:33.373380] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:26.714 [2024-11-19 00:12:33.373408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.373415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.714 [2024-11-19 00:12:33.373424] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:28:26.714 [2024-11-19 00:12:33.373431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.373494] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 00:28:26.714 [2024-11-19 00:12:33.374830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.374878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:26.714 [2024-11-19 00:12:33.374886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:26.714 [2024-11-19 00:12:33.374896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.381356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.381395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.714 [2024-11-19 00:12:33.381406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.425 ms 00:28:26.714 [2024-11-19 00:12:33.381414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.381491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.381501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.714 [2024-11-19 00:12:33.381508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:26.714 [2024-11-19 00:12:33.381518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.381564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.381574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:26.714 [2024-11-19 00:12:33.381581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:26.714 [2024-11-19 00:12:33.381591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.381608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:26.714 [2024-11-19 00:12:33.384880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.384915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.714 [2024-11-19 00:12:33.384925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:28:26.714 [2024-11-19 00:12:33.384931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.384961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.384968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:26.714 [2024-11-19 00:12:33.384977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:26.714 [2024-11-19 00:12:33.384983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.384997] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:26.714 [2024-11-19 00:12:33.385116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:26.714 [2024-11-19 00:12:33.385141] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:26.714 [2024-11-19 00:12:33.385151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:26.714 [2024-11-19 00:12:33.385161] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385169] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385178] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:26.714 [2024-11-19 00:12:33.385184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:26.714 [2024-11-19 00:12:33.385194] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:26.714 [2024-11-19 00:12:33.385200] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:26.714 [2024-11-19 00:12:33.385208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.385214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:26.714 [2024-11-19 00:12:33.385222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:28:26.714 [2024-11-19 00:12:33.385234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.385299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.714 [2024-11-19 00:12:33.385306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:26.714 [2024-11-19 00:12:33.385314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:26.714 [2024-11-19 00:12:33.385319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.714 [2024-11-19 00:12:33.385402] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:26.714 [2024-11-19 00:12:33.385411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:26.714 [2024-11-19 00:12:33.385419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:26.714 [2024-11-19 00:12:33.385438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:26.714 [2024-11-19 00:12:33.385458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:26.714 [2024-11-19 00:12:33.385469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:26.714 [2024-11-19 00:12:33.385474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:26.714 [2024-11-19 00:12:33.385481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:26.714 [2024-11-19 00:12:33.385486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:26.714 [2024-11-19 00:12:33.385493] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:26.714 [2024-11-19 00:12:33.385498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:26.714 [2024-11-19 00:12:33.385513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:26.714 [2024-11-19 00:12:33.385535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:26.714 [2024-11-19 00:12:33.385553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:26.714 [2024-11-19 00:12:33.385570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:26.714 [2024-11-19 00:12:33.385588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.714 [2024-11-19 00:12:33.385599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:26.714 [2024-11-19 00:12:33.385607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:26.714 [2024-11-19 00:12:33.385618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:26.714 [2024-11-19 00:12:33.385623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:26.714 [2024-11-19 00:12:33.385630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:26.714 [2024-11-19 00:12:33.385635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:26.714 [2024-11-19 00:12:33.385641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:26.714 [2024-11-19 00:12:33.385647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.714 [2024-11-19 00:12:33.385654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:26.714 [2024-11-19 00:12:33.385659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:26.715 [2024-11-19 00:12:33.385665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.715 [2024-11-19 00:12:33.385670] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:26.715 [2024-11-19 00:12:33.385678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:26.715 [2024-11-19 00:12:33.385684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:26.715 [2024-11-19 00:12:33.385691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.715 [2024-11-19 00:12:33.385697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:26.715 [2024-11-19 00:12:33.385705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:26.715 [2024-11-19 00:12:33.385709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:26.715 [2024-11-19 00:12:33.385718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:26.715 [2024-11-19 00:12:33.385723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:26.715 [2024-11-19 00:12:33.385730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:26.715 [2024-11-19 00:12:33.385738] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:26.715 [2024-11-19 00:12:33.385747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:26.715 [2024-11-19 00:12:33.385764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:26.715 [2024-11-19 00:12:33.385769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:26.715 [2024-11-19 00:12:33.385776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:26.715 [2024-11-19 00:12:33.385782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:26.715 [2024-11-19 00:12:33.385789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:26.715 [2024-11-19 00:12:33.385795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:26.715 [2024-11-19 00:12:33.385802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:26.715 [2024-11-19 00:12:33.385807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:26.715 [2024-11-19 00:12:33.385815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
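Note on the SB metadata dump above: region sizes are given in FTL blocks, while the layout dump earlier in this run reports MiB; the two agree under the 4 KiB block size implied by the numbers themselves (the log never states it directly). For example, the type 0x2 region (blk_sz 0x5000 = 20480 blocks) works out to exactly the 80.00 MiB reported for the l2p region, and the final region ends exactly at the 5171.00 MiB NV cache capacity. A quick shell sanity check:

  # Region sizes in 4 KiB FTL blocks, converted to MiB (block size is an
  # assumption inferred from the MiB figures earlier in this log):
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))               # 80   -> l2p region, 80.00 MiB
  echo $(( (0x7220 + 0x13c0e0) * 4096 / 1024 / 1024 ))  # 5171 -> NV cache capacity, 5171.00 MiB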
00:28:26.715 [2024-11-19 00:12:33.385848] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:26.715 [2024-11-19 00:12:33.385856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:26.715 [2024-11-19 00:12:33.385869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:26.715 [2024-11-19 00:12:33.385874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:26.715 [2024-11-19 00:12:33.385882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:26.715 [2024-11-19 00:12:33.385888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.715 [2024-11-19 00:12:33.385896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:26.715 [2024-11-19 00:12:33.385902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:28:26.715 [2024-11-19 00:12:33.385909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.715 [2024-11-19 00:12:33.385952] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:26.715 [2024-11-19 00:12:33.385964] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:30.995 [2024-11-19 00:12:37.412043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.995 [2024-11-19 00:12:37.412153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:30.996 [2024-11-19 00:12:37.412173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4026.073 ms 00:28:30.996 [2024-11-19 00:12:37.412186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.444459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.444525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:30.996 [2024-11-19 00:12:37.444540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.015 ms 00:28:30.996 [2024-11-19 00:12:37.444552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.444698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.444712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:30.996 [2024-11-19 00:12:37.444722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:30.996 [2024-11-19 00:12:37.444736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.480738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.480795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:30.996 [2024-11-19 00:12:37.480807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.963 ms 00:28:30.996 [2024-11-19 00:12:37.480818] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.480855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.480872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:30.996 [2024-11-19 00:12:37.480881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.996 [2024-11-19 00:12:37.480892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.481536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.481578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:30.996 [2024-11-19 00:12:37.481589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:28:30.996 [2024-11-19 00:12:37.481600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.481720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.481733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:30.996 [2024-11-19 00:12:37.481746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:28:30.996 [2024-11-19 00:12:37.481759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.499962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.500023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:30.996 [2024-11-19 00:12:37.500034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.183 ms 00:28:30.996 [2024-11-19 00:12:37.500046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.514055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:30.996 [2024-11-19 00:12:37.518308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.518358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:30.996 [2024-11-19 00:12:37.518372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:28:30.996 [2024-11-19 00:12:37.518381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.634201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.634274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:30.996 [2024-11-19 00:12:37.634294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.778 ms 00:28:30.996 [2024-11-19 00:12:37.634304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.634522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.634539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:30.996 [2024-11-19 00:12:37.634554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:28:30.996 [2024-11-19 00:12:37.634563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.996 [2024-11-19 00:12:37.661358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.996 [2024-11-19 00:12:37.661411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:30.996 [2024-11-19 00:12:37.661428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.731 ms 00:28:30.996 [2024-11-19 00:12:37.661437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.687579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.687630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:31.258 [2024-11-19 00:12:37.687647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.077 ms 00:28:31.258 [2024-11-19 00:12:37.687655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.688316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.688354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.258 [2024-11-19 00:12:37.688366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:28:31.258 [2024-11-19 00:12:37.688374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.778360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.778417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:31.258 [2024-11-19 00:12:37.778438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.916 ms 00:28:31.258 [2024-11-19 00:12:37.778448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.806029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.806086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:31.258 [2024-11-19 00:12:37.806113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.488 ms 00:28:31.258 [2024-11-19 00:12:37.806131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.832172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.832223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:31.258 [2024-11-19 00:12:37.832238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.983 ms 00:28:31.258 [2024-11-19 00:12:37.832246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.858852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.858910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.258 [2024-11-19 00:12:37.858928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.550 ms 00:28:31.258 [2024-11-19 00:12:37.858936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.858997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 00:12:37.859008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.258 [2024-11-19 00:12:37.859023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:31.258 [2024-11-19 00:12:37.859032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.258 [2024-11-19 00:12:37.859163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.258 [2024-11-19 
00:12:37.859177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:31.258 [2024-11-19 00:12:37.859192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:28:31.258 [2024-11-19 00:12:37.859200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.258 [2024-11-19 00:12:37.860609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4487.498 ms, result 0
00:28:31.258 {
00:28:31.258 "name": "ftl0",
00:28:31.258 "uuid": "5f4f8e29-ab62-4bf4-a4e2-751020ad819a"
00:28:31.258 }
00:28:31.258 00:12:37 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:28:31.258 00:12:37 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:28:31.520 00:12:38 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:28:31.520 00:12:38 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:28:31.780 [2024-11-19 00:12:38.299695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:31.780 [2024-11-19 00:12:38.299764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:31.780 [2024-11-19 00:12:38.299778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:31.780 [2024-11-19 00:12:38.299796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.780 [2024-11-19 00:12:38.299821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:31.780 [2024-11-19 00:12:38.302970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:31.780 [2024-11-19 00:12:38.303018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:31.780 [2024-11-19 00:12:38.303033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms
00:28:31.780 [2024-11-19 00:12:38.303041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.780 [2024-11-19 00:12:38.303327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:31.780 [2024-11-19 00:12:38.303339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:31.780 [2024-11-19 00:12:38.303356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms
00:28:31.780 [2024-11-19 00:12:38.303364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.780 [2024-11-19 00:12:38.306619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:31.780 [2024-11-19 00:12:38.306647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:31.780 [2024-11-19 00:12:38.306659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms
00:28:31.780 [2024-11-19 00:12:38.306667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.780 [2024-11-19 00:12:38.312824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:31.780 [2024-11-19 00:12:38.312864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:31.780 [2024-11-19 00:12:38.312883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms
00:28:31.780 [2024-11-19 00:12:38.312892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:31.780 [2024-11-19 00:12:38.339499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
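Note: restore.sh@61-65 above snapshots the live bdev configuration and then tears the FTL device down: the save_subsystem_config output is wrapped in a '{"subsystems": [...]}' envelope (presumably redirected into the ftl.json that spdk_dd consumes later in this log; the redirection itself is not visible in the trace) before bdev_ftl_unload shuts ftl0 down, leaving its state persisted on media. A minimal sketch of the same sequence under those assumptions:

  # Wrap the bdev subsystem config in a complete JSON config file; the
  # target path is inferred from the later spdk_dd --json= argument.
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # Unload the live FTL instance; on-media state survives for the restore.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0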
[FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.339554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:31.780 [2024-11-19 00:12:38.339571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.502 ms 00:28:31.780 [2024-11-19 00:12:38.339579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.358350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.358401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:31.780 [2024-11-19 00:12:38.358417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:28:31.780 [2024-11-19 00:12:38.358425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.358605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.358618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:31.780 [2024-11-19 00:12:38.358630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:28:31.780 [2024-11-19 00:12:38.358638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.385153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.385207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:31.780 [2024-11-19 00:12:38.385222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.491 ms 00:28:31.780 [2024-11-19 00:12:38.385230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.411325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.411378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:31.780 [2024-11-19 00:12:38.411393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.038 ms 00:28:31.780 [2024-11-19 00:12:38.411401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.437485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.437538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:31.780 [2024-11-19 00:12:38.437553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.024 ms 00:28:31.780 [2024-11-19 00:12:38.437561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.462914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.780 [2024-11-19 00:12:38.462965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:31.780 [2024-11-19 00:12:38.462981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.232 ms 00:28:31.780 [2024-11-19 00:12:38.462988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.780 [2024-11-19 00:12:38.463040] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:31.780 [2024-11-19 00:12:38.463056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463077] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:31.780 [2024-11-19 00:12:38.463265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463319] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 
[2024-11-19 00:12:38.463541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:31.781 [2024-11-19 00:12:38.463776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:31.781 [2024-11-19 00:12:38.463983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:31.781 [2024-11-19 00:12:38.463998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 
00:28:31.781 [2024-11-19 00:12:38.464007] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:31.781 [2024-11-19 00:12:38.464018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:31.781 [2024-11-19 00:12:38.464025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:31.781 [2024-11-19 00:12:38.464039] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:31.781 [2024-11-19 00:12:38.464046] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:31.781 [2024-11-19 00:12:38.464056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:31.781 [2024-11-19 00:12:38.464065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:31.781 [2024-11-19 00:12:38.464073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:31.781 [2024-11-19 00:12:38.464080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:31.781 [2024-11-19 00:12:38.464090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.781 [2024-11-19 00:12:38.464097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:31.781 [2024-11-19 00:12:38.464108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:28:31.781 [2024-11-19 00:12:38.464115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.041 [2024-11-19 00:12:38.478205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.041 [2024-11-19 00:12:38.478254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.042 [2024-11-19 00:12:38.478269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.012 ms 00:28:32.042 [2024-11-19 00:12:38.478276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.478675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.042 [2024-11-19 00:12:38.478699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.042 [2024-11-19 00:12:38.478710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:28:32.042 [2024-11-19 00:12:38.478721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.525541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.525595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.042 [2024-11-19 00:12:38.525609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.525617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.525688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.525697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.042 [2024-11-19 00:12:38.525707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.525718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.525829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.525841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.042 [2024-11-19 00:12:38.525852] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.525859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.525883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.525891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.042 [2024-11-19 00:12:38.525901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.525908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.612103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.612185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.042 [2024-11-19 00:12:38.612203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.612212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.042 [2024-11-19 00:12:38.682246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.682259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.042 [2024-11-19 00:12:38.682384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.682392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.042 [2024-11-19 00:12:38.682485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.682493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.042 [2024-11-19 00:12:38.682622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.682630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.042 [2024-11-19 00:12:38.682688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.042 [2024-11-19 00:12:38.682696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.042 [2024-11-19 00:12:38.682742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.042 [2024-11-19 00:12:38.682754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev
00:28:32.042 [2024-11-19 00:12:38.682764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:32.042 [2024-11-19 00:12:38.682773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.042 [2024-11-19 00:12:38.682828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:32.042 [2024-11-19 00:12:38.682850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:32.042 [2024-11-19 00:12:38.682861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:32.042 [2024-11-19 00:12:38.682869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.042 [2024-11-19 00:12:38.683024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.282 ms, result 0
00:28:32.042 true
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81236
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81236 ']'
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81236
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:32.042 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81236
00:28:32.303 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:32.303 killing process with pid 81236
00:28:32.303 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:32.303 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81236'
00:28:32.303 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81236
00:28:32.303 00:12:38 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81236
00:28:38.897 00:12:44 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:28:42.199 262144+0 records in
00:28:42.199 262144+0 records out
00:28:42.199 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.13682 s, 260 MB/s
00:28:42.199 00:12:48 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:28:44.737 00:12:50 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:44.737 [2024-11-19 00:12:50.918155] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
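Note: the dd summary above checks out: 1073741824 bytes over 4.13682 s is about 259.6 million bytes/s, which dd reports in decimal megabytes as 260 MB/s. The md5sum is taken before spdk_dd copies the file into ftl0, presumably so the contents can be verified against a read-back after the restore.

  # dd rates are decimal MB/s: bytes / seconds / 10^6
  echo "scale=1; 1073741824 / 4.13682 / 1000000" | bc   # 259.5, i.e. dd's rounded 260 MB/s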
00:28:44.737 [2024-11-19 00:12:50.918263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81469 ]
00:28:44.737 [2024-11-19 00:12:51.071382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.737 [2024-11-19 00:12:51.171068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.999 [2024-11-19 00:12:51.435740] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:44.999 [2024-11-19 00:12:51.435820] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:44.999 [2024-11-19 00:12:51.596943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.999 [2024-11-19 00:12:51.597010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:44.999 [2024-11-19 00:12:51.597031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:44.999 [2024-11-19 00:12:51.597039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.999 [2024-11-19 00:12:51.597092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.999 [2024-11-19 00:12:51.597104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:44.999 [2024-11-19 00:12:51.597115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:28:44.999 [2024-11-19 00:12:51.597139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.999 [2024-11-19 00:12:51.597161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:44.999 [2024-11-19 00:12:51.598065] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:44.999 [2024-11-19 00:12:51.598109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.999 [2024-11-19 00:12:51.598118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:44.999 [2024-11-19 00:12:51.598145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms
00:28:44.999 [2024-11-19 00:12:51.598154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.999 [2024-11-19 00:12:51.599904] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:44.999 [2024-11-19 00:12:51.614236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.999 [2024-11-19 00:12:51.614286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:44.999 [2024-11-19 00:12:51.614300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.333 ms
00:28:44.999 [2024-11-19 00:12:51.614309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.999 [2024-11-19 00:12:51.614388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.999 [2024-11-19 00:12:51.614398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:44.999 [2024-11-19 00:12:51.614407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:28:44.999 [2024-11-19 00:12:51.614415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.999 [2024-11-19 00:12:51.622409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
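Note: the two 'Currently unable to find bdev with name: nvc0n1' notices above appear to be benign lookup retries while spdk_dd is still instantiating bdevs from the JSON config; once the cache partition registers, nvc0n1p0 is attached as the write buffer and ftl0 is reopened from the superblock persisted during the earlier shutdown. The config file carries the envelope assembled at restore.sh@61-63; its shape, with the inner entries elided (illustrative only):

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [ ... ] }
    ]
  }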
00:28:44.999 [2024-11-19 00:12:51.622451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.999 [2024-11-19 00:12:51.622461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:28:45.000 [2024-11-19 00:12:51.622469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.622551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.622561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.000 [2024-11-19 00:12:51.622569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:45.000 [2024-11-19 00:12:51.622578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.622621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.622632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:45.000 [2024-11-19 00:12:51.622641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:45.000 [2024-11-19 00:12:51.622648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.622672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:45.000 [2024-11-19 00:12:51.626681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.626720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.000 [2024-11-19 00:12:51.626730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:28:45.000 [2024-11-19 00:12:51.626741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.626774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.626782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:45.000 [2024-11-19 00:12:51.626791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:45.000 [2024-11-19 00:12:51.626798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.626850] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:45.000 [2024-11-19 00:12:51.626872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:45.000 [2024-11-19 00:12:51.626910] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:45.000 [2024-11-19 00:12:51.626928] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:45.000 [2024-11-19 00:12:51.627034] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:45.000 [2024-11-19 00:12:51.627045] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:45.000 [2024-11-19 00:12:51.627056] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:45.000 [2024-11-19 00:12:51.627066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627075] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627084] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:45.000 [2024-11-19 00:12:51.627091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:45.000 [2024-11-19 00:12:51.627099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:45.000 [2024-11-19 00:12:51.627107] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:45.000 [2024-11-19 00:12:51.627118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.627142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:45.000 [2024-11-19 00:12:51.627150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:45.000 [2024-11-19 00:12:51.627158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.627242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.000 [2024-11-19 00:12:51.627251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:45.000 [2024-11-19 00:12:51.627260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:45.000 [2024-11-19 00:12:51.627267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.000 [2024-11-19 00:12:51.627371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:45.000 [2024-11-19 00:12:51.627392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:45.000 [2024-11-19 00:12:51.627400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:45.000 [2024-11-19 00:12:51.627424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:45.000 [2024-11-19 00:12:51.627444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.000 [2024-11-19 00:12:51.627458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:45.000 [2024-11-19 00:12:51.627464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:45.000 [2024-11-19 00:12:51.627470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.000 [2024-11-19 00:12:51.627479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:45.000 [2024-11-19 00:12:51.627487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:45.000 [2024-11-19 00:12:51.627499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:45.000 [2024-11-19 00:12:51.627513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627519] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:45.000 [2024-11-19 00:12:51.627534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:45.000 [2024-11-19 00:12:51.627553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:45.000 [2024-11-19 00:12:51.627572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:45.000 [2024-11-19 00:12:51.627592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:45.000 [2024-11-19 00:12:51.627612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.000 [2024-11-19 00:12:51.627624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:45.000 [2024-11-19 00:12:51.627631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:45.000 [2024-11-19 00:12:51.627638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.000 [2024-11-19 00:12:51.627645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:45.000 [2024-11-19 00:12:51.627652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:45.000 [2024-11-19 00:12:51.627658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:45.000 [2024-11-19 00:12:51.627671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:45.000 [2024-11-19 00:12:51.627677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627683] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:45.000 [2024-11-19 00:12:51.627691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:45.000 [2024-11-19 00:12:51.627701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.000 [2024-11-19 00:12:51.627717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:45.000 [2024-11-19 00:12:51.627724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:45.000 [2024-11-19 00:12:51.627730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:45.000 
[2024-11-19 00:12:51.627737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:45.000 [2024-11-19 00:12:51.627744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:45.000 [2024-11-19 00:12:51.627751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:45.000 [2024-11-19 00:12:51.627759] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:45.000 [2024-11-19 00:12:51.627768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.000 [2024-11-19 00:12:51.627777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:45.000 [2024-11-19 00:12:51.627784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:45.000 [2024-11-19 00:12:51.627791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:45.000 [2024-11-19 00:12:51.627798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:45.000 [2024-11-19 00:12:51.627805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:45.000 [2024-11-19 00:12:51.627812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:45.000 [2024-11-19 00:12:51.627820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:45.001 [2024-11-19 00:12:51.627827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:45.001 [2024-11-19 00:12:51.627833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:45.001 [2024-11-19 00:12:51.627841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:45.001 [2024-11-19 00:12:51.627877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:45.001 [2024-11-19 00:12:51.627887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.001 [2024-11-19 00:12:51.627902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:45.001 [2024-11-19 00:12:51.627909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:45.001 [2024-11-19 00:12:51.627916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:45.001 [2024-11-19 00:12:51.627924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.001 [2024-11-19 00:12:51.627931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:45.001 [2024-11-19 00:12:51.627940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:28:45.001 [2024-11-19 00:12:51.627948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.001 [2024-11-19 00:12:51.659653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.001 [2024-11-19 00:12:51.659872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.001 [2024-11-19 00:12:51.659891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.660 ms 00:28:45.001 [2024-11-19 00:12:51.659901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.001 [2024-11-19 00:12:51.659997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.001 [2024-11-19 00:12:51.660007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:45.001 [2024-11-19 00:12:51.660016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:45.001 [2024-11-19 00:12:51.660023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.262 [2024-11-19 00:12:51.707759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.262 [2024-11-19 00:12:51.707961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.262 [2024-11-19 00:12:51.707983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.678 ms 00:28:45.262 [2024-11-19 00:12:51.707992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.262 [2024-11-19 00:12:51.708042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.262 [2024-11-19 00:12:51.708054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:45.262 [2024-11-19 00:12:51.708064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:45.263 [2024-11-19 00:12:51.708077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.708695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.708719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:45.263 [2024-11-19 00:12:51.708730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:28:45.263 [2024-11-19 00:12:51.708738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.708895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.708907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:45.263 [2024-11-19 00:12:51.708915] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:28:45.263 [2024-11-19 00:12:51.708930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.724579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.724624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:45.263 [2024-11-19 00:12:51.724638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.629 ms 00:28:45.263 [2024-11-19 00:12:51.724647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.739128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:45.263 [2024-11-19 00:12:51.739177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:45.263 [2024-11-19 00:12:51.739191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.739200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:45.263 [2024-11-19 00:12:51.739210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.438 ms 00:28:45.263 [2024-11-19 00:12:51.739218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.765034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.765085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:45.263 [2024-11-19 00:12:51.765105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.757 ms 00:28:45.263 [2024-11-19 00:12:51.765113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.778247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.778438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:45.263 [2024-11-19 00:12:51.778458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:28:45.263 [2024-11-19 00:12:51.778466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.790811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.790857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:45.263 [2024-11-19 00:12:51.790869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.307 ms 00:28:45.263 [2024-11-19 00:12:51.790876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.791545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.791570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:45.263 [2024-11-19 00:12:51.791580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:28:45.263 [2024-11-19 00:12:51.791588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.856470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.856535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:45.263 [2024-11-19 00:12:51.856551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.860 ms 00:28:45.263 [2024-11-19 00:12:51.856567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.867608] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:45.263 [2024-11-19 00:12:51.870628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.870672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:45.263 [2024-11-19 00:12:51.870686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:28:45.263 [2024-11-19 00:12:51.870694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.870779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.870790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:45.263 [2024-11-19 00:12:51.870800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:45.263 [2024-11-19 00:12:51.870808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.870881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.870892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:45.263 [2024-11-19 00:12:51.870900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:45.263 [2024-11-19 00:12:51.870908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.870929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.870938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:45.263 [2024-11-19 00:12:51.870946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:45.263 [2024-11-19 00:12:51.870955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.870989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:45.263 [2024-11-19 00:12:51.871001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.871013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:45.263 [2024-11-19 00:12:51.871021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:45.263 [2024-11-19 00:12:51.871029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.896624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.896673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:45.263 [2024-11-19 00:12:51.896687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.576 ms 00:28:45.263 [2024-11-19 00:12:51.896695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.263 [2024-11-19 00:12:51.896787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.263 [2024-11-19 00:12:51.896797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:45.263 [2024-11-19 00:12:51.896806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:45.263 [2024-11-19 00:12:51.896815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:45.263 [2024-11-19 00:12:51.898378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.897 ms, result 0 00:28:46.642  [2024-11-19T00:12:54.273Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-19T00:12:55.216Z] Copying: 40/1024 [MB] (27 MBps) [2024-11-19T00:12:56.159Z] Copying: 66/1024 [MB] (26 MBps) [2024-11-19T00:12:57.102Z] Copying: 77/1024 [MB] (10 MBps) [2024-11-19T00:12:58.045Z] Copying: 88/1024 [MB] (10 MBps) [2024-11-19T00:12:58.989Z] Copying: 99/1024 [MB] (10 MBps) [2024-11-19T00:12:59.935Z] Copying: 109964/1048576 [kB] (8560 kBps) [2024-11-19T00:13:01.321Z] Copying: 118904/1048576 [kB] (8940 kBps) [2024-11-19T00:13:02.264Z] Copying: 137/1024 [MB] (21 MBps) [2024-11-19T00:13:03.208Z] Copying: 166/1024 [MB] (29 MBps) [2024-11-19T00:13:04.153Z] Copying: 187/1024 [MB] (20 MBps) [2024-11-19T00:13:05.096Z] Copying: 204/1024 [MB] (17 MBps) [2024-11-19T00:13:06.040Z] Copying: 216/1024 [MB] (11 MBps) [2024-11-19T00:13:06.985Z] Copying: 229/1024 [MB] (12 MBps) [2024-11-19T00:13:07.929Z] Copying: 240/1024 [MB] (11 MBps) [2024-11-19T00:13:09.316Z] Copying: 254/1024 [MB] (14 MBps) [2024-11-19T00:13:10.259Z] Copying: 265/1024 [MB] (10 MBps) [2024-11-19T00:13:11.203Z] Copying: 275/1024 [MB] (10 MBps) [2024-11-19T00:13:12.230Z] Copying: 291/1024 [MB] (16 MBps) [2024-11-19T00:13:13.174Z] Copying: 311/1024 [MB] (19 MBps) [2024-11-19T00:13:14.118Z] Copying: 340/1024 [MB] (28 MBps) [2024-11-19T00:13:15.061Z] Copying: 355/1024 [MB] (15 MBps) [2024-11-19T00:13:16.007Z] Copying: 377/1024 [MB] (21 MBps) [2024-11-19T00:13:16.950Z] Copying: 396/1024 [MB] (19 MBps) [2024-11-19T00:13:18.338Z] Copying: 414/1024 [MB] (17 MBps) [2024-11-19T00:13:19.284Z] Copying: 429/1024 [MB] (14 MBps) [2024-11-19T00:13:20.226Z] Copying: 445/1024 [MB] (16 MBps) [2024-11-19T00:13:21.169Z] Copying: 457/1024 [MB] (12 MBps) [2024-11-19T00:13:22.111Z] Copying: 467/1024 [MB] (10 MBps) [2024-11-19T00:13:23.055Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-19T00:13:24.000Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-19T00:13:24.943Z] Copying: 502/1024 [MB] (14 MBps) [2024-11-19T00:13:26.330Z] Copying: 515/1024 [MB] (12 MBps) [2024-11-19T00:13:27.275Z] Copying: 529/1024 [MB] (14 MBps) [2024-11-19T00:13:28.217Z] Copying: 545/1024 [MB] (16 MBps) [2024-11-19T00:13:29.162Z] Copying: 561/1024 [MB] (15 MBps) [2024-11-19T00:13:30.126Z] Copying: 575/1024 [MB] (14 MBps) [2024-11-19T00:13:31.071Z] Copying: 590/1024 [MB] (14 MBps) [2024-11-19T00:13:32.016Z] Copying: 605/1024 [MB] (14 MBps) [2024-11-19T00:13:32.959Z] Copying: 616/1024 [MB] (11 MBps) [2024-11-19T00:13:34.344Z] Copying: 627/1024 [MB] (10 MBps) [2024-11-19T00:13:34.917Z] Copying: 647/1024 [MB] (20 MBps) [2024-11-19T00:13:36.305Z] Copying: 659/1024 [MB] (12 MBps) [2024-11-19T00:13:37.248Z] Copying: 671/1024 [MB] (11 MBps) [2024-11-19T00:13:38.192Z] Copying: 687/1024 [MB] (15 MBps) [2024-11-19T00:13:39.137Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-19T00:13:40.086Z] Copying: 712/1024 [MB] (11 MBps) [2024-11-19T00:13:41.030Z] Copying: 730/1024 [MB] (17 MBps) [2024-11-19T00:13:41.974Z] Copying: 747/1024 [MB] (17 MBps) [2024-11-19T00:13:42.919Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-19T00:13:43.931Z] Copying: 779/1024 [MB] (19 MBps) [2024-11-19T00:13:45.316Z] Copying: 790/1024 [MB] (11 MBps) [2024-11-19T00:13:46.258Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-19T00:13:47.202Z] Copying: 815/1024 [MB] (14 MBps) [2024-11-19T00:13:48.145Z] Copying: 831/1024 [MB] (16 MBps) [2024-11-19T00:13:49.087Z] Copying: 848/1024 [MB] 
(16 MBps) [2024-11-19T00:13:50.030Z] Copying: 860/1024 [MB] (12 MBps) [2024-11-19T00:13:50.972Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-19T00:13:51.917Z] Copying: 889/1024 [MB] (16 MBps) [2024-11-19T00:13:53.304Z] Copying: 905/1024 [MB] (15 MBps) [2024-11-19T00:13:54.247Z] Copying: 920/1024 [MB] (14 MBps) [2024-11-19T00:13:55.190Z] Copying: 936/1024 [MB] (16 MBps) [2024-11-19T00:13:56.133Z] Copying: 951/1024 [MB] (14 MBps) [2024-11-19T00:13:57.076Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-19T00:13:58.017Z] Copying: 975/1024 [MB] (13 MBps) [2024-11-19T00:13:58.958Z] Copying: 988/1024 [MB] (13 MBps) [2024-11-19T00:14:00.339Z] Copying: 1005/1024 [MB] (16 MBps) [2024-11-19T00:14:00.340Z] Copying: 1021/1024 [MB] (15 MBps) [2024-11-19T00:14:00.340Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 00:14:00.178562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.178629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:53.648 [2024-11-19 00:14:00.178646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:53.648 [2024-11-19 00:14:00.178656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.178678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:53.648 [2024-11-19 00:14:00.181741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.181784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:53.648 [2024-11-19 00:14:00.181796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:29:53.648 [2024-11-19 00:14:00.181806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.184800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.184987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:53.648 [2024-11-19 00:14:00.185009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:29:53.648 [2024-11-19 00:14:00.185018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.185053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.185062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:53.648 [2024-11-19 00:14:00.185071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:53.648 [2024-11-19 00:14:00.185078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.185159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.185172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:53.648 [2024-11-19 00:14:00.185181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:53.648 [2024-11-19 00:14:00.185189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.185204] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:53.648 [2024-11-19 00:14:00.185217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185587] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 
00:14:00.185786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:29:53.648 [2024-11-19 00:14:00.185974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.185997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:53.648 [2024-11-19 00:14:00.186176] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:53.648 
[2024-11-19 00:14:00.186184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 00:29:53.648 [2024-11-19 00:14:00.186192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:53.648 [2024-11-19 00:14:00.186199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:53.648 [2024-11-19 00:14:00.186207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:53.648 [2024-11-19 00:14:00.186215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:53.648 [2024-11-19 00:14:00.186226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:53.648 [2024-11-19 00:14:00.186235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:53.648 [2024-11-19 00:14:00.186243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:53.648 [2024-11-19 00:14:00.186260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:53.648 [2024-11-19 00:14:00.186267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:53.648 [2024-11-19 00:14:00.186275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.186283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:53.648 [2024-11-19 00:14:00.186292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:29:53.648 [2024-11-19 00:14:00.186300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.200010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.200214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:53.648 [2024-11-19 00:14:00.200243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.693 ms 00:29:53.648 [2024-11-19 00:14:00.200251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.200638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.648 [2024-11-19 00:14:00.200654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:53.648 [2024-11-19 00:14:00.200662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:29:53.648 [2024-11-19 00:14:00.200670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.237335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.648 [2024-11-19 00:14:00.237390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.648 [2024-11-19 00:14:00.237402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.648 [2024-11-19 00:14:00.237411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.237481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.648 [2024-11-19 00:14:00.237490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.648 [2024-11-19 00:14:00.237500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.648 [2024-11-19 00:14:00.237509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.648 [2024-11-19 00:14:00.237583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.649 [2024-11-19 
00:14:00.237595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.649 [2024-11-19 00:14:00.237610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.649 [2024-11-19 00:14:00.237618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.649 [2024-11-19 00:14:00.237635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.649 [2024-11-19 00:14:00.237644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.649 [2024-11-19 00:14:00.237653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.649 [2024-11-19 00:14:00.237666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.649 [2024-11-19 00:14:00.320939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.649 [2024-11-19 00:14:00.320997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.649 [2024-11-19 00:14:00.321017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.649 [2024-11-19 00:14:00.321025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.390858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.390916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.908 [2024-11-19 00:14:00.390931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.390940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.391012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.908 [2024-11-19 00:14:00.391021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.391037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.391107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.908 [2024-11-19 00:14:00.391116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.391157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.391249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.908 [2024-11-19 00:14:00.391273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.391281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.391330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:53.908 [2024-11-19 00:14:00.391339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.391348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391390] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.908 [2024-11-19 00:14:00.391400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.908 [2024-11-19 00:14:00.391409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.908 [2024-11-19 00:14:00.391417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.908 [2024-11-19 00:14:00.391468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.909 [2024-11-19 00:14:00.391479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.909 [2024-11-19 00:14:00.391487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.909 [2024-11-19 00:14:00.391496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.909 [2024-11-19 00:14:00.391637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.056 ms, result 0 00:29:54.477 00:29:54.477 00:29:54.739 00:14:01 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:54.739 [2024-11-19 00:14:01.263963] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:54.739 [2024-11-19 00:14:01.265804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82176 ] 00:29:55.000 [2024-11-19 00:14:01.442374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.000 [2024-11-19 00:14:01.562635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.259 [2024-11-19 00:14:01.854765] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.259 [2024-11-19 00:14:01.854845] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.521 [2024-11-19 00:14:02.016989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.017054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:55.521 [2024-11-19 00:14:02.017075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.521 [2024-11-19 00:14:02.017084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.017162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.017174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.521 [2024-11-19 00:14:02.017187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:55.521 [2024-11-19 00:14:02.017195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.017217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:55.521 [2024-11-19 00:14:02.018035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:55.521 [2024-11-19 00:14:02.018089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.018098] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.521 [2024-11-19 00:14:02.018108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:29:55.521 [2024-11-19 00:14:02.018116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.018460] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:55.521 [2024-11-19 00:14:02.018488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.018497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:55.521 [2024-11-19 00:14:02.018510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:55.521 [2024-11-19 00:14:02.018517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.018585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.018595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:55.521 [2024-11-19 00:14:02.018603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:55.521 [2024-11-19 00:14:02.018611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.018877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.018890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.521 [2024-11-19 00:14:02.018899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:29:55.521 [2024-11-19 00:14:02.018907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.018980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.018989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.521 [2024-11-19 00:14:02.018997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:55.521 [2024-11-19 00:14:02.019005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.019028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.019037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:55.521 [2024-11-19 00:14:02.019045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:55.521 [2024-11-19 00:14:02.019056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.019077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:55.521 [2024-11-19 00:14:02.023400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.023595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.521 [2024-11-19 00:14:02.023615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:29:55.521 [2024-11-19 00:14:02.023625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.023663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.023671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:55.521 [2024-11-19 00:14:02.023679] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:55.521 [2024-11-19 00:14:02.023687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.023746] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:55.521 [2024-11-19 00:14:02.023773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:55.521 [2024-11-19 00:14:02.023811] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:55.521 [2024-11-19 00:14:02.023828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:55.521 [2024-11-19 00:14:02.023937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:55.521 [2024-11-19 00:14:02.023948] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:55.521 [2024-11-19 00:14:02.023959] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:55.521 [2024-11-19 00:14:02.023970] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:55.521 [2024-11-19 00:14:02.023980] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:55.521 [2024-11-19 00:14:02.023988] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:55.521 [2024-11-19 00:14:02.023998] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:55.521 [2024-11-19 00:14:02.024005] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:55.521 [2024-11-19 00:14:02.024013] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:55.521 [2024-11-19 00:14:02.024021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.024028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:55.521 [2024-11-19 00:14:02.024036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:29:55.521 [2024-11-19 00:14:02.024043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.024149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.521 [2024-11-19 00:14:02.024159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:55.521 [2024-11-19 00:14:02.024167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:55.521 [2024-11-19 00:14:02.024177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.521 [2024-11-19 00:14:02.024283] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:55.521 [2024-11-19 00:14:02.024296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:55.522 [2024-11-19 00:14:02.024305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:55.522 [2024-11-19 00:14:02.024330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:55.522 
[2024-11-19 00:14:02.024337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:55.522 [2024-11-19 00:14:02.024350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.522 [2024-11-19 00:14:02.024365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:55.522 [2024-11-19 00:14:02.024372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:55.522 [2024-11-19 00:14:02.024379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.522 [2024-11-19 00:14:02.024387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:55.522 [2024-11-19 00:14:02.024394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:55.522 [2024-11-19 00:14:02.024400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:55.522 [2024-11-19 00:14:02.024420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:55.522 [2024-11-19 00:14:02.024439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:55.522 [2024-11-19 00:14:02.024458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:55.522 [2024-11-19 00:14:02.024478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:55.522 [2024-11-19 00:14:02.024498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:55.522 [2024-11-19 00:14:02.024518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.522 [2024-11-19 00:14:02.024531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:55.522 [2024-11-19 00:14:02.024537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:55.522 [2024-11-19 00:14:02.024544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.522 [2024-11-19 00:14:02.024550] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:29:55.522 [2024-11-19 00:14:02.024557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:55.522 [2024-11-19 00:14:02.024563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:55.522 [2024-11-19 00:14:02.024575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:55.522 [2024-11-19 00:14:02.024584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:55.522 [2024-11-19 00:14:02.024599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:55.522 [2024-11-19 00:14:02.024606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.522 [2024-11-19 00:14:02.024621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:55.522 [2024-11-19 00:14:02.024628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:55.522 [2024-11-19 00:14:02.024634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:55.522 [2024-11-19 00:14:02.024641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:55.522 [2024-11-19 00:14:02.024648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:55.522 [2024-11-19 00:14:02.024655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:55.522 [2024-11-19 00:14:02.024663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:55.522 [2024-11-19 00:14:02.024675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:55.522 [2024-11-19 00:14:02.024691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:55.522 [2024-11-19 00:14:02.024699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:55.522 [2024-11-19 00:14:02.024706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:55.522 [2024-11-19 00:14:02.024713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:55.522 [2024-11-19 00:14:02.024721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:55.522 [2024-11-19 00:14:02.024729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:55.522 [2024-11-19 00:14:02.024736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:55.522 [2024-11-19 00:14:02.024743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:55.522 [2024-11-19 00:14:02.024749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:55.522 [2024-11-19 00:14:02.024786] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:55.522 [2024-11-19 00:14:02.024795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:55.522 [2024-11-19 00:14:02.024811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:55.522 [2024-11-19 00:14:02.024819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:55.523 [2024-11-19 00:14:02.024832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:55.523 [2024-11-19 00:14:02.024840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.024848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:55.523 [2024-11-19 00:14:02.024857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:29:55.523 [2024-11-19 00:14:02.024864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.052688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.052732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.523 [2024-11-19 00:14:02.052743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.780 ms 00:29:55.523 [2024-11-19 00:14:02.052752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.052835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.052844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:55.523 [2024-11-19 00:14:02.052853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:55.523 [2024-11-19 00:14:02.052865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.101016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.101242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:29:55.523 [2024-11-19 00:14:02.101264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.095 ms 00:29:55.523 [2024-11-19 00:14:02.101274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.101329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.101340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.523 [2024-11-19 00:14:02.101350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:55.523 [2024-11-19 00:14:02.101358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.101475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.101487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.523 [2024-11-19 00:14:02.101496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:55.523 [2024-11-19 00:14:02.101504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.101637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.101651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.523 [2024-11-19 00:14:02.101659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:29:55.523 [2024-11-19 00:14:02.101667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.117513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.117562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.523 [2024-11-19 00:14:02.117575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.826 ms 00:29:55.523 [2024-11-19 00:14:02.117583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.117735] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:55.523 [2024-11-19 00:14:02.117749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:55.523 [2024-11-19 00:14:02.117759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.117770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:55.523 [2024-11-19 00:14:02.117779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:55.523 [2024-11-19 00:14:02.117785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.130238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.130284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:55.523 [2024-11-19 00:14:02.130295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:29:55.523 [2024-11-19 00:14:02.130302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.130433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.130444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:55.523 [2024-11-19 00:14:02.130452] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:55.523 [2024-11-19 00:14:02.130466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.130517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.130526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:55.523 [2024-11-19 00:14:02.130548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:55.523 [2024-11-19 00:14:02.130556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.131181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.131198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:55.523 [2024-11-19 00:14:02.131208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:29:55.523 [2024-11-19 00:14:02.131216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.131234] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:55.523 [2024-11-19 00:14:02.131247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.131255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:55.523 [2024-11-19 00:14:02.131264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:55.523 [2024-11-19 00:14:02.131272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.143849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:55.523 [2024-11-19 00:14:02.144006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.144017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:55.523 [2024-11-19 00:14:02.144027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms 00:29:55.523 [2024-11-19 00:14:02.144035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.146144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.146178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:55.523 [2024-11-19 00:14:02.146189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:29:55.523 [2024-11-19 00:14:02.146196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.146289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.146299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:55.523 [2024-11-19 00:14:02.146308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:55.523 [2024-11-19 00:14:02.146316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.146339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.146348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:55.523 [2024-11-19 00:14:02.146361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:55.523 [2024-11-19 00:14:02.146368] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.146399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:55.523 [2024-11-19 00:14:02.146408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.146416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:55.523 [2024-11-19 00:14:02.146423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:55.523 [2024-11-19 00:14:02.146430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.173289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.173348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:55.523 [2024-11-19 00:14:02.173362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.837 ms 00:29:55.523 [2024-11-19 00:14:02.173370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.173460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.523 [2024-11-19 00:14:02.173470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:55.523 [2024-11-19 00:14:02.173480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:55.523 [2024-11-19 00:14:02.173488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.523 [2024-11-19 00:14:02.174895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.425 ms, result 0 00:29:56.907  [2024-11-19T00:14:04.541Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-19T00:14:05.483Z] Copying: 31/1024 [MB] (20 MBps) [2024-11-19T00:14:06.428Z] Copying: 49/1024 [MB] (18 MBps) [2024-11-19T00:14:07.373Z] Copying: 68/1024 [MB] (18 MBps) [2024-11-19T00:14:08.760Z] Copying: 88/1024 [MB] (19 MBps) [2024-11-19T00:14:09.705Z] Copying: 108/1024 [MB] (20 MBps) [2024-11-19T00:14:10.650Z] Copying: 128/1024 [MB] (20 MBps) [2024-11-19T00:14:11.595Z] Copying: 149/1024 [MB] (20 MBps) [2024-11-19T00:14:12.538Z] Copying: 168/1024 [MB] (19 MBps) [2024-11-19T00:14:13.481Z] Copying: 189/1024 [MB] (20 MBps) [2024-11-19T00:14:14.432Z] Copying: 207/1024 [MB] (18 MBps) [2024-11-19T00:14:15.458Z] Copying: 228/1024 [MB] (20 MBps) [2024-11-19T00:14:16.403Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-19T00:14:17.791Z] Copying: 251/1024 [MB] (11 MBps) [2024-11-19T00:14:18.736Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-19T00:14:19.685Z] Copying: 280/1024 [MB] (18 MBps) [2024-11-19T00:14:20.628Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-19T00:14:21.572Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-19T00:14:22.516Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-19T00:14:23.464Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-19T00:14:24.409Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-19T00:14:25.799Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-19T00:14:26.372Z] Copying: 371/1024 [MB] (12 MBps) [2024-11-19T00:14:27.760Z] Copying: 389/1024 [MB] (18 MBps) [2024-11-19T00:14:28.704Z] Copying: 404/1024 [MB] (15 MBps) [2024-11-19T00:14:29.649Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-19T00:14:30.593Z] Copying: 428/1024 [MB] (12 MBps) [2024-11-19T00:14:31.538Z] Copying: 443/1024 [MB] (14 MBps) [2024-11-19T00:14:32.483Z] Copying: 463/1024 [MB] (20 MBps) [2024-11-19T00:14:33.428Z] Copying: 484/1024 [MB] (21 MBps) 
[2024-11-19T00:14:34.372Z] Copying: 496/1024 [MB] (12 MBps) [2024-11-19T00:14:35.760Z] Copying: 515/1024 [MB] (18 MBps) [2024-11-19T00:14:36.704Z] Copying: 538/1024 [MB] (22 MBps) [2024-11-19T00:14:37.645Z] Copying: 558/1024 [MB] (19 MBps) [2024-11-19T00:14:38.590Z] Copying: 574/1024 [MB] (16 MBps) [2024-11-19T00:14:39.533Z] Copying: 595/1024 [MB] (20 MBps) [2024-11-19T00:14:40.474Z] Copying: 615/1024 [MB] (19 MBps) [2024-11-19T00:14:41.417Z] Copying: 632/1024 [MB] (17 MBps) [2024-11-19T00:14:42.806Z] Copying: 653/1024 [MB] (20 MBps) [2024-11-19T00:14:43.380Z] Copying: 668/1024 [MB] (15 MBps) [2024-11-19T00:14:44.768Z] Copying: 688/1024 [MB] (19 MBps) [2024-11-19T00:14:45.712Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-19T00:14:46.736Z] Copying: 718/1024 [MB] (18 MBps) [2024-11-19T00:14:47.385Z] Copying: 731/1024 [MB] (13 MBps) [2024-11-19T00:14:48.770Z] Copying: 747/1024 [MB] (16 MBps) [2024-11-19T00:14:49.714Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-19T00:14:50.657Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-19T00:14:51.602Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-19T00:14:52.546Z] Copying: 791/1024 [MB] (10 MBps) [2024-11-19T00:14:53.490Z] Copying: 802/1024 [MB] (11 MBps) [2024-11-19T00:14:54.435Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-19T00:14:55.377Z] Copying: 829/1024 [MB] (15 MBps) [2024-11-19T00:14:56.764Z] Copying: 841/1024 [MB] (11 MBps) [2024-11-19T00:14:57.707Z] Copying: 851/1024 [MB] (10 MBps) [2024-11-19T00:14:58.650Z] Copying: 864/1024 [MB] (12 MBps) [2024-11-19T00:14:59.592Z] Copying: 876/1024 [MB] (12 MBps) [2024-11-19T00:15:00.537Z] Copying: 892/1024 [MB] (15 MBps) [2024-11-19T00:15:01.481Z] Copying: 916/1024 [MB] (24 MBps) [2024-11-19T00:15:02.424Z] Copying: 938/1024 [MB] (21 MBps) [2024-11-19T00:15:03.367Z] Copying: 962/1024 [MB] (24 MBps) [2024-11-19T00:15:04.756Z] Copying: 984/1024 [MB] (22 MBps) [2024-11-19T00:15:05.699Z] Copying: 1002/1024 [MB] (17 MBps) [2024-11-19T00:15:05.699Z] Copying: 1022/1024 [MB] (20 MBps) [2024-11-19T00:15:05.961Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 00:15:05.779832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.269 [2024-11-19 00:15:05.779925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:59.269 [2024-11-19 00:15:05.779942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:59.269 [2024-11-19 00:15:05.779951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.269 [2024-11-19 00:15:05.779976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:59.269 [2024-11-19 00:15:05.783061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.269 [2024-11-19 00:15:05.783108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:59.269 [2024-11-19 00:15:05.783129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.067 ms 00:30:59.269 [2024-11-19 00:15:05.783140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.269 [2024-11-19 00:15:05.783385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.269 [2024-11-19 00:15:05.783397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:59.269 [2024-11-19 00:15:05.783406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:30:59.269 [2024-11-19 00:15:05.783415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.269 
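The progress ticks above close with "(average 16 MBps)"; as a hedged cross-check (plain arithmetic on timestamps copied verbatim from the log entries above), that average is consistent with the wall time between 'FTL startup' finishing and the final 1024/1024 tick.

# Minimal sketch: cross-check the reported average throughput against the
# log's own timestamps (values copied from the entries above).
from datetime import datetime

start = datetime.fromisoformat("2024-11-19T00:14:02.174895")  # 'FTL startup' finished
end   = datetime.fromisoformat("2024-11-19T00:15:05.961000")  # final "1024/1024 [MB]" tick
elapsed_s = (end - start).total_seconds()                     # ~63.8 s

print(round(1024 / elapsed_s))   # 16 -> matches "(average 16 MBps)"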
[2024-11-19 00:15:05.783447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.269 [2024-11-19 00:15:05.783468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:59.269 [2024-11-19 00:15:05.783478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:59.269 [2024-11-19 00:15:05.783486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.269 [2024-11-19 00:15:05.783547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.269 [2024-11-19 00:15:05.783556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:59.269 [2024-11-19 00:15:05.783565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:59.269 [2024-11-19 00:15:05.783573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.269 [2024-11-19 00:15:05.783588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:59.269 [2024-11-19 00:15:05.783601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.269 [2024-11-19 00:15:05.783702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.783995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784148] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 
00:15:05.784349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.270 [2024-11-19 00:15:05.784418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.270 [2024-11-19 00:15:05.784426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 00:30:59.270 [2024-11-19 00:15:05.784437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:59.271 [2024-11-19 00:15:05.784444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:59.271 [2024-11-19 00:15:05.784466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:59.271 [2024-11-19 00:15:05.784942] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:59.271 [2024-11-19 00:15:05.784949] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.271 [2024-11-19 00:15:05.784957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.271 [2024-11-19 00:15:05.784964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.271 [2024-11-19 00:15:05.784971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.271 [2024-11-19 00:15:05.784977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.271 [2024-11-19 00:15:05.784985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.271 [2024-11-19 00:15:05.784993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.271 [2024-11-19 00:15:05.785001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:30:59.271 [2024-11-19 00:15:05.785009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.800238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.271 [2024-11-19 00:15:05.800294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:59.271 [2024-11-19 00:15:05.800308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.206 ms 00:30:59.271 [2024-11-19 00:15:05.800316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.800713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.271 [2024-11-19 00:15:05.800739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.271 [2024-11-19 
00:15:05.800750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:30:59.271 [2024-11-19 00:15:05.800767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.839718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.271 [2024-11-19 00:15:05.839774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.271 [2024-11-19 00:15:05.839788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.271 [2024-11-19 00:15:05.839797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.839874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.271 [2024-11-19 00:15:05.839885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.271 [2024-11-19 00:15:05.839895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.271 [2024-11-19 00:15:05.839910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.839978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.271 [2024-11-19 00:15:05.839990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.271 [2024-11-19 00:15:05.840000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.271 [2024-11-19 00:15:05.840009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.840027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.271 [2024-11-19 00:15:05.840037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.271 [2024-11-19 00:15:05.840046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.271 [2024-11-19 00:15:05.840055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.271 [2024-11-19 00:15:05.925143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.271 [2024-11-19 00:15:05.925207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.271 [2024-11-19 00:15:05.925220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.271 [2024-11-19 00:15:05.925229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.532 [2024-11-19 00:15:05.996286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.532 [2024-11-19 00:15:05.996403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996464] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.532 [2024-11-19 00:15:05.996472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.532 [2024-11-19 00:15:05.996585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.532 [2024-11-19 00:15:05.996636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.532 [2024-11-19 00:15:05.996706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.532 [2024-11-19 00:15:05.996768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.532 [2024-11-19 00:15:05.996777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.532 [2024-11-19 00:15:05.996785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.532 [2024-11-19 00:15:05.996919] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 217.050 ms, result 0 00:31:00.105 00:31:00.105 00:31:00.105 00:15:06 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:02.656 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:02.656 00:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:02.656 [2024-11-19 00:15:08.821591] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
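On the spdk_dd invocation above, --seek counts blocks of the output bdev; assuming ftl0 exposes FTL's usual 4096-byte blocks (an assumption, the block size is not stated in the log), --seek=131072 places this second restore write 512 MiB into the device.

# Minimal sketch: byte offset implied by --seek=131072, assuming a
# 4096-byte block size for the ftl0 output bdev.
seek_blocks = 131072
block_size = 4096                                  # assumption
print(seek_blocks * block_size // (1024 * 1024))   # 512 (MiB)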
00:31:02.657 [2024-11-19 00:15:08.822330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82851 ] 00:31:02.657 [2024-11-19 00:15:08.976049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.657 [2024-11-19 00:15:09.080975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.918 [2024-11-19 00:15:09.372590] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.918 [2024-11-19 00:15:09.372670] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.918 [2024-11-19 00:15:09.534148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.918 [2024-11-19 00:15:09.534213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:02.918 [2024-11-19 00:15:09.534234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:02.918 [2024-11-19 00:15:09.534243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.918 [2024-11-19 00:15:09.534299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.918 [2024-11-19 00:15:09.534311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.918 [2024-11-19 00:15:09.534323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:02.918 [2024-11-19 00:15:09.534331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.918 [2024-11-19 00:15:09.534352] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:02.918 [2024-11-19 00:15:09.535146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:02.918 [2024-11-19 00:15:09.535180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.918 [2024-11-19 00:15:09.535190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:02.918 [2024-11-19 00:15:09.535200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:31:02.918 [2024-11-19 00:15:09.535207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.918 [2024-11-19 00:15:09.535834] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:02.918 [2024-11-19 00:15:09.535908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.535919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:02.919 [2024-11-19 00:15:09.535936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:02.919 [2024-11-19 00:15:09.535945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.536005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.536015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:02.919 [2024-11-19 00:15:09.536023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:02.919 [2024-11-19 00:15:09.536030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.536365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:02.919 [2024-11-19 00:15:09.536381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.919 [2024-11-19 00:15:09.536390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:31:02.919 [2024-11-19 00:15:09.536398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.536472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.536481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.919 [2024-11-19 00:15:09.536490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:02.919 [2024-11-19 00:15:09.536498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.536521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.536530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:02.919 [2024-11-19 00:15:09.536538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:02.919 [2024-11-19 00:15:09.536549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.536572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:02.919 [2024-11-19 00:15:09.540833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.540877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.919 [2024-11-19 00:15:09.540888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.268 ms 00:31:02.919 [2024-11-19 00:15:09.540897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.540933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.540943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:02.919 [2024-11-19 00:15:09.540952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:02.919 [2024-11-19 00:15:09.540961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.541027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:02.919 [2024-11-19 00:15:09.541053] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:02.919 [2024-11-19 00:15:09.541096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:02.919 [2024-11-19 00:15:09.541114] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:02.919 [2024-11-19 00:15:09.541237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:02.919 [2024-11-19 00:15:09.541249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:02.919 [2024-11-19 00:15:09.541262] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:02.919 [2024-11-19 00:15:09.541274] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541284] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541293] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:02.919 [2024-11-19 00:15:09.541305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:02.919 [2024-11-19 00:15:09.541314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:02.919 [2024-11-19 00:15:09.541322] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:02.919 [2024-11-19 00:15:09.541333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.541343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:02.919 [2024-11-19 00:15:09.541352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:31:02.919 [2024-11-19 00:15:09.541361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.541450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.919 [2024-11-19 00:15:09.541460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:02.919 [2024-11-19 00:15:09.541470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:02.919 [2024-11-19 00:15:09.541480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.919 [2024-11-19 00:15:09.541585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:02.919 [2024-11-19 00:15:09.541604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:02.919 [2024-11-19 00:15:09.541613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:02.919 [2024-11-19 00:15:09.541636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:02.919 [2024-11-19 00:15:09.541659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.919 [2024-11-19 00:15:09.541673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:02.919 [2024-11-19 00:15:09.541679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:02.919 [2024-11-19 00:15:09.541689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.919 [2024-11-19 00:15:09.541696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:02.919 [2024-11-19 00:15:09.541704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:02.919 [2024-11-19 00:15:09.541710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:02.919 [2024-11-19 00:15:09.541732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541739] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:02.919 [2024-11-19 00:15:09.541752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:02.919 [2024-11-19 00:15:09.541773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:02.919 [2024-11-19 00:15:09.541793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:02.919 [2024-11-19 00:15:09.541813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:02.919 [2024-11-19 00:15:09.541832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.919 [2024-11-19 00:15:09.541845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:02.919 [2024-11-19 00:15:09.541851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:02.919 [2024-11-19 00:15:09.541858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.919 [2024-11-19 00:15:09.541864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:02.919 [2024-11-19 00:15:09.541871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:02.919 [2024-11-19 00:15:09.541878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:02.919 [2024-11-19 00:15:09.541891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:02.919 [2024-11-19 00:15:09.541897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:02.919 [2024-11-19 00:15:09.541913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:02.919 [2024-11-19 00:15:09.541921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.919 [2024-11-19 00:15:09.541935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:02.919 [2024-11-19 00:15:09.541942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:02.919 [2024-11-19 00:15:09.541949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:02.919 
[2024-11-19 00:15:09.541956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:02.919 [2024-11-19 00:15:09.541963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:02.919 [2024-11-19 00:15:09.541969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:02.919 [2024-11-19 00:15:09.541978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:02.920 [2024-11-19 00:15:09.541991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:02.920 [2024-11-19 00:15:09.542007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:02.920 [2024-11-19 00:15:09.542014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:02.920 [2024-11-19 00:15:09.542021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:02.920 [2024-11-19 00:15:09.542028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:02.920 [2024-11-19 00:15:09.542035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:02.920 [2024-11-19 00:15:09.542042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:02.920 [2024-11-19 00:15:09.542049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:02.920 [2024-11-19 00:15:09.542055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:02.920 [2024-11-19 00:15:09.542062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:02.920 [2024-11-19 00:15:09.542098] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:02.920 [2024-11-19 00:15:09.542106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:02.920 [2024-11-19 00:15:09.542147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:02.920 [2024-11-19 00:15:09.542155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:02.920 [2024-11-19 00:15:09.542163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:02.920 [2024-11-19 00:15:09.542170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.920 [2024-11-19 00:15:09.542180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:02.920 [2024-11-19 00:15:09.542189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:31:02.920 [2024-11-19 00:15:09.542198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.920 [2024-11-19 00:15:09.569888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.920 [2024-11-19 00:15:09.569936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.920 [2024-11-19 00:15:09.569948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.645 ms 00:31:02.920 [2024-11-19 00:15:09.569956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.920 [2024-11-19 00:15:09.570040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.920 [2024-11-19 00:15:09.570049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:02.920 [2024-11-19 00:15:09.570059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:02.920 [2024-11-19 00:15:09.570071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.616100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.616167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:03.182 [2024-11-19 00:15:09.616181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.951 ms 00:31:03.182 [2024-11-19 00:15:09.616190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.616242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.616252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:03.182 [2024-11-19 00:15:09.616263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:03.182 [2024-11-19 00:15:09.616270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.616383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.616396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:03.182 [2024-11-19 00:15:09.616406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:03.182 [2024-11-19 00:15:09.616415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.616545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.616557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:03.182 [2024-11-19 00:15:09.616567] 
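
The superblock v5 dump above encodes each region as hex blk_offs/blk_sz pairs counted in FTL blocks. With the 4 KiB block size implied by the data region (type:0x9, blk_sz:0x1900000 corresponds to the 102400.00 MiB data_btm region), the hex fields convert directly to the MiB figures in the earlier layout dump. A helper sketch, assuming that 4 KiB block size:

    # Convert a hex block count from the SB metadata dump into MiB,
    # assuming the 4 KiB FTL block size implied by the dump above.
    blk_to_mib() {
        echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc
    }
    blk_to_mib 0x5000     # type:0x2 (l2p)  -> 80.00 MiB
    blk_to_mib 0x1900000  # type:0x9 (data) -> 102400.00 MiB
    blk_to_mib 0x800      # type:0xa (p2l)  -> 8.00 MiB
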
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:03.182 [2024-11-19 00:15:09.616575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.632377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.632427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:03.182 [2024-11-19 00:15:09.632438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.780 ms 00:31:03.182 [2024-11-19 00:15:09.632446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.632605] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:03.182 [2024-11-19 00:15:09.632620] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:03.182 [2024-11-19 00:15:09.632630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.632643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:03.182 [2024-11-19 00:15:09.632652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:03.182 [2024-11-19 00:15:09.632659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.644938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.644985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:03.182 [2024-11-19 00:15:09.644997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.260 ms 00:31:03.182 [2024-11-19 00:15:09.645006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.645144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.645154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:03.182 [2024-11-19 00:15:09.645163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:31:03.182 [2024-11-19 00:15:09.645176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.645230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.645242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:03.182 [2024-11-19 00:15:09.645250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:03.182 [2024-11-19 00:15:09.645258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.645848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.645873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:03.182 [2024-11-19 00:15:09.645882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:31:03.182 [2024-11-19 00:15:09.645890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.182 [2024-11-19 00:15:09.645907] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:03.182 [2024-11-19 00:15:09.645919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.182 [2024-11-19 00:15:09.645927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:03.182 [2024-11-19 00:15:09.645936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:03.182 [2024-11-19 00:15:09.645943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.658559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:03.183 [2024-11-19 00:15:09.658748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.658760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:03.183 [2024-11-19 00:15:09.658770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:31:03.183 [2024-11-19 00:15:09.658778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.661012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.661048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:03.183 [2024-11-19 00:15:09.661058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:31:03.183 [2024-11-19 00:15:09.661065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.661172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.661184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:03.183 [2024-11-19 00:15:09.661194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:03.183 [2024-11-19 00:15:09.661202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.661227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.661236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:03.183 [2024-11-19 00:15:09.661249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:03.183 [2024-11-19 00:15:09.661257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.661289] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:03.183 [2024-11-19 00:15:09.661298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.661306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:03.183 [2024-11-19 00:15:09.661314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:03.183 [2024-11-19 00:15:09.661322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.687767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.687825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:03.183 [2024-11-19 00:15:09.687838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.424 ms 00:31:03.183 [2024-11-19 00:15:09.687846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.687944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.183 [2024-11-19 00:15:09.687955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:03.183 [2024-11-19 00:15:09.687965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
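
Each management step above is logged by trace_step() as an Action with a name, duration, and status. Working from the raw per-line console (this excerpt re-wraps entries), the steps can be tabulated against the 'FTL startup' total of 154.731 ms reported by finish_msg just below; the per-step durations should sum to slightly less, since gaps between steps are not attributed to any step. A sketch, with the log file name assumed:

    # Pair each trace_step "name:" entry with the "duration:" entry that
    # follows it, and sum the per-step times (assumes one log entry per
    # line, as in the raw Jenkins console).
    grep -o "trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] \(name\|duration\): [^[]*" build.log |
    awk '/ name: /     { sub(/.* name: /, "");     step = $0 }
         / duration: / { sub(/.* duration: /, ""); printf "%-32s %s\n", step, $0; sum += $1 }
         END           { printf "%-32s %.3f ms\n", "total", sum }'
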
[FTL][ftl0] duration: 0.042 ms 00:31:03.183 [2024-11-19 00:15:09.687973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.183 [2024-11-19 00:15:09.689364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.731 ms, result 0 00:31:04.127  [2024-11-19T00:15:11.763Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-19T00:15:12.703Z] Copying: 29/1024 [MB] (11 MBps) [2024-11-19T00:15:14.086Z] Copying: 46/1024 [MB] (16 MBps) [2024-11-19T00:15:15.027Z] Copying: 59/1024 [MB] (13 MBps) [2024-11-19T00:15:15.968Z] Copying: 74/1024 [MB] (15 MBps) [2024-11-19T00:15:16.909Z] Copying: 89/1024 [MB] (14 MBps) [2024-11-19T00:15:17.851Z] Copying: 104/1024 [MB] (14 MBps) [2024-11-19T00:15:18.849Z] Copying: 118/1024 [MB] (13 MBps) [2024-11-19T00:15:19.792Z] Copying: 137/1024 [MB] (19 MBps) [2024-11-19T00:15:20.732Z] Copying: 149/1024 [MB] (12 MBps) [2024-11-19T00:15:22.113Z] Copying: 160/1024 [MB] (10 MBps) [2024-11-19T00:15:23.054Z] Copying: 170/1024 [MB] (10 MBps) [2024-11-19T00:15:23.998Z] Copying: 182/1024 [MB] (11 MBps) [2024-11-19T00:15:24.941Z] Copying: 198/1024 [MB] (15 MBps) [2024-11-19T00:15:25.885Z] Copying: 215/1024 [MB] (17 MBps) [2024-11-19T00:15:26.830Z] Copying: 227/1024 [MB] (12 MBps) [2024-11-19T00:15:27.774Z] Copying: 237/1024 [MB] (10 MBps) [2024-11-19T00:15:28.718Z] Copying: 249/1024 [MB] (11 MBps) [2024-11-19T00:15:30.107Z] Copying: 262/1024 [MB] (12 MBps) [2024-11-19T00:15:31.052Z] Copying: 272/1024 [MB] (10 MBps) [2024-11-19T00:15:31.996Z] Copying: 286/1024 [MB] (13 MBps) [2024-11-19T00:15:32.941Z] Copying: 302/1024 [MB] (16 MBps) [2024-11-19T00:15:33.886Z] Copying: 316/1024 [MB] (13 MBps) [2024-11-19T00:15:34.830Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-19T00:15:35.772Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-19T00:15:36.716Z] Copying: 349/1024 [MB] (12 MBps) [2024-11-19T00:15:38.103Z] Copying: 364/1024 [MB] (14 MBps) [2024-11-19T00:15:39.047Z] Copying: 376/1024 [MB] (12 MBps) [2024-11-19T00:15:39.989Z] Copying: 387/1024 [MB] (10 MBps) [2024-11-19T00:15:40.932Z] Copying: 398/1024 [MB] (10 MBps) [2024-11-19T00:15:41.877Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-19T00:15:42.821Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-19T00:15:43.764Z] Copying: 455/1024 [MB] (37 MBps) [2024-11-19T00:15:44.709Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-19T00:15:46.095Z] Copying: 477/1024 [MB] (10 MBps) [2024-11-19T00:15:47.039Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-19T00:15:47.982Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-19T00:15:48.927Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-19T00:15:49.872Z] Copying: 519/1024 [MB] (10 MBps) [2024-11-19T00:15:50.853Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-19T00:15:51.797Z] Copying: 541/1024 [MB] (12 MBps) [2024-11-19T00:15:52.739Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-19T00:15:54.126Z] Copying: 565/1024 [MB] (12 MBps) [2024-11-19T00:15:55.069Z] Copying: 583/1024 [MB] (17 MBps) [2024-11-19T00:15:56.013Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-19T00:15:56.958Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-19T00:15:57.903Z] Copying: 633/1024 [MB] (29 MBps) [2024-11-19T00:15:58.848Z] Copying: 669/1024 [MB] (35 MBps) [2024-11-19T00:15:59.790Z] Copying: 696/1024 [MB] (26 MBps) [2024-11-19T00:16:00.734Z] Copying: 740/1024 [MB] (44 MBps) [2024-11-19T00:16:02.121Z] Copying: 756/1024 [MB] (15 MBps) [2024-11-19T00:16:03.065Z] Copying: 800/1024 [MB] (43 MBps) [2024-11-19T00:16:04.008Z] Copying: 847/1024 [MB] (46 MBps) [2024-11-19T00:16:04.952Z] Copying: 894/1024 [MB] 
(46 MBps) [2024-11-19T00:16:05.896Z] Copying: 933/1024 [MB] (38 MBps) [2024-11-19T00:16:06.839Z] Copying: 947/1024 [MB] (14 MBps) [2024-11-19T00:16:07.785Z] Copying: 961/1024 [MB] (13 MBps) [2024-11-19T00:16:08.730Z] Copying: 976/1024 [MB] (15 MBps) [2024-11-19T00:16:10.116Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-19T00:16:11.060Z] Copying: 1001/1024 [MB] (12 MBps) [2024-11-19T00:16:12.004Z] Copying: 1020/1024 [MB] (19 MBps) [2024-11-19T00:16:12.005Z] Copying: 1048336/1048576 [kB] (3492 kBps) [2024-11-19T00:16:12.267Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 00:16:12.004926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.575 [2024-11-19 00:16:12.005002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:05.575 [2024-11-19 00:16:12.005030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:05.575 [2024-11-19 00:16:12.005039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.575 [2024-11-19 00:16:12.008206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:05.575 [2024-11-19 00:16:12.013671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.575 [2024-11-19 00:16:12.013724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:05.575 [2024-11-19 00:16:12.013738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:32:05.575 [2024-11-19 00:16:12.013747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.575 [2024-11-19 00:16:12.025683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.575 [2024-11-19 00:16:12.025758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:05.575 [2024-11-19 00:16:12.025773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.670 ms 00:32:05.575 [2024-11-19 00:16:12.025783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.575 [2024-11-19 00:16:12.025814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.575 [2024-11-19 00:16:12.025824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:05.575 [2024-11-19 00:16:12.025833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:05.575 [2024-11-19 00:16:12.025842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.575 [2024-11-19 00:16:12.025903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.576 [2024-11-19 00:16:12.025913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:05.576 [2024-11-19 00:16:12.025928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:05.576 [2024-11-19 00:16:12.025944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.576 [2024-11-19 00:16:12.025960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:05.576 [2024-11-19 00:16:12.025981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:32:05.576 [2024-11-19 00:16:12.025992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 
00:16:12.026010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:32:05.576 [2024-11-19 00:16:12.026247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:05.576 [2024-11-19 00:16:12.026658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.026991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.027006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.027014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:05.577 [2024-11-19 00:16:12.027031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:05.577 [2024-11-19 00:16:12.027048] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 00:32:05.577 [2024-11-19 00:16:12.027058] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 129536 00:32:05.577 [2024-11-19 00:16:12.027067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:32:05.577 [2024-11-19 00:16:12.027082] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:32:05.577 [2024-11-19 00:16:12.027091] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:05.577 [2024-11-19 00:16:12.027099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:05.577 [2024-11-19 00:16:12.027110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:05.577 [2024-11-19 00:16:12.027140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:05.577 [2024-11-19 00:16:12.027148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:05.577 [2024-11-19 00:16:12.027156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:05.577 [2024-11-19 00:16:12.027164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.577 [2024-11-19 00:16:12.027181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:05.577 [2024-11-19 00:16:12.027223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:32:05.577 [2024-11-19 00:16:12.027239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.041206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.577 [2024-11-19 00:16:12.041256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:05.577 [2024-11-19 00:16:12.041270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.946 ms 00:32:05.577 [2024-11-19 00:16:12.041285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.041678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.577 [2024-11-19 00:16:12.041699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:05.577 [2024-11-19 00:16:12.041709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:32:05.577 [2024-11-19 00:16:12.041719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.078988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.079038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:05.577 [2024-11-19 00:16:12.079057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.079067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.079154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.079166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:05.577 [2024-11-19 00:16:12.079177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.079187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.079287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.079302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:05.577 [2024-11-19 00:16:12.079312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 
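
The statistics dump above is internally consistent: total valid LBAs (129536) equals Band 1's valid-block count, i.e. all live data sits in the single open band, and the write amplification factor is simply total writes over user writes. A quick check (sketch):

    # WAF = total writes / user writes, from the counters dumped above.
    # Only ~32 extra blocks of metadata writes on top of the user I/O.
    total=129568; user=129536
    echo "scale=4; $total / $user" | bc   # -> 1.0002
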
00:16:12.079327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.079347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.079359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:05.577 [2024-11-19 00:16:12.079369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.079378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.165359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.165422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:05.577 [2024-11-19 00:16:12.165443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.165452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.236089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.236170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:05.577 [2024-11-19 00:16:12.236190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.236199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.236254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.236264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:05.577 [2024-11-19 00:16:12.236274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.236283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.236352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.577 [2024-11-19 00:16:12.236364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:05.577 [2024-11-19 00:16:12.236373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.577 [2024-11-19 00:16:12.236382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.577 [2024-11-19 00:16:12.236463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.578 [2024-11-19 00:16:12.236473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:05.578 [2024-11-19 00:16:12.236482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.578 [2024-11-19 00:16:12.236490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.578 [2024-11-19 00:16:12.236521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.578 [2024-11-19 00:16:12.236530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:05.578 [2024-11-19 00:16:12.236539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.578 [2024-11-19 00:16:12.236549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.578 [2024-11-19 00:16:12.236590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.578 [2024-11-19 00:16:12.236600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:05.578 [2024-11-19 00:16:12.236609] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.578 [2024-11-19 00:16:12.236617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.578 [2024-11-19 00:16:12.236669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:05.578 [2024-11-19 00:16:12.236682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:05.578 [2024-11-19 00:16:12.236691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:05.578 [2024-11-19 00:16:12.236699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.578 [2024-11-19 00:16:12.236838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 235.242 ms, result 0 00:32:06.966 00:32:06.966 00:32:06.966 00:16:13 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:06.966 [2024-11-19 00:16:13.621256] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:06.966 [2024-11-19 00:16:13.621405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83518 ] 00:32:07.228 [2024-11-19 00:16:13.783554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.228 [2024-11-19 00:16:13.902915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.803 [2024-11-19 00:16:14.192327] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.803 [2024-11-19 00:16:14.192414] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.803 [2024-11-19 00:16:14.353905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.353976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:07.803 [2024-11-19 00:16:14.353997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:07.803 [2024-11-19 00:16:14.354006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.354064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.354075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:07.803 [2024-11-19 00:16:14.354086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:07.803 [2024-11-19 00:16:14.354094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.354114] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:07.803 [2024-11-19 00:16:14.355360] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:07.803 [2024-11-19 00:16:14.355419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.355430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:07.803 [2024-11-19 00:16:14.355441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:32:07.803 [2024-11-19 
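
At 00:16:13 the ftl/restore.sh step re-opens the device and reads back through the freshly restored ftl0 bdev. The invocation, re-set here with its flags annotated (paths and values verbatim from the log; the units of --skip/--count follow spdk_dd's block handling, which this excerpt does not show):

    # Read from input bdev ftl0 (the restored FTL device) into a plain
    # file, skipping 131072 units and copying 262144 units, using the
    # saved FTL JSON config. Copied verbatim from the test step above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --skip=131072 --count=262144
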
00:16:14.355449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.355776] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:07.803 [2024-11-19 00:16:14.355804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.355814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:07.803 [2024-11-19 00:16:14.355828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:07.803 [2024-11-19 00:16:14.355836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.356362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.356417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:07.803 [2024-11-19 00:16:14.356433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:07.803 [2024-11-19 00:16:14.356442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.356753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.356771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:07.803 [2024-11-19 00:16:14.356781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:32:07.803 [2024-11-19 00:16:14.356790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.356862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.356874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:07.803 [2024-11-19 00:16:14.356882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:07.803 [2024-11-19 00:16:14.356890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.356919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.356929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:07.803 [2024-11-19 00:16:14.356938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:07.803 [2024-11-19 00:16:14.356949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.356973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:07.803 [2024-11-19 00:16:14.361311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.361360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:07.803 [2024-11-19 00:16:14.361370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:32:07.803 [2024-11-19 00:16:14.361378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.361414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.361424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:07.803 [2024-11-19 00:16:14.361433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:07.803 [2024-11-19 00:16:14.361441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.803 [2024-11-19 00:16:14.361505] 
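
Unlike a cold start, this startup finds the fast-shutdown state intact: "SHM: clean 1, shm_clean 1" above indicates the superblock and the shared-memory state agree that the previous shutdown completed cleanly, which is what enables the fast restore path. The total for each management process in a run can be pulled out of the console with a one-liner (log file name assumed):

    # List every FTL management process and its total duration; this
    # excerpt contains 'FTL startup' (154.731 ms) and 'FTL fast shutdown'
    # (235.242 ms).
    grep -o "Management process finished, name '[^']*', duration = [0-9.]* ms, result [0-9]*" build.log
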
ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:07.803 [2024-11-19 00:16:14.361530] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:07.803 [2024-11-19 00:16:14.361570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:07.803 [2024-11-19 00:16:14.361587] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:07.803 [2024-11-19 00:16:14.361693] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:07.803 [2024-11-19 00:16:14.361704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:07.803 [2024-11-19 00:16:14.361715] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:07.803 [2024-11-19 00:16:14.361727] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:07.803 [2024-11-19 00:16:14.361737] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:07.803 [2024-11-19 00:16:14.361745] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:07.803 [2024-11-19 00:16:14.361756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:07.803 [2024-11-19 00:16:14.361764] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:07.803 [2024-11-19 00:16:14.361772] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:07.803 [2024-11-19 00:16:14.361781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.803 [2024-11-19 00:16:14.361789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:07.804 [2024-11-19 00:16:14.361797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:32:07.804 [2024-11-19 00:16:14.361805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.361892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.361902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:07.804 [2024-11-19 00:16:14.361910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:07.804 [2024-11-19 00:16:14.361920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.362024] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:07.804 [2024-11-19 00:16:14.362045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:07.804 [2024-11-19 00:16:14.362054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:07.804 [2024-11-19 00:16:14.362082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:32:07.804 [2024-11-19 00:16:14.362106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.804 [2024-11-19 00:16:14.362135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:07.804 [2024-11-19 00:16:14.362144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:07.804 [2024-11-19 00:16:14.362151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.804 [2024-11-19 00:16:14.362158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:07.804 [2024-11-19 00:16:14.362166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:07.804 [2024-11-19 00:16:14.362174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:07.804 [2024-11-19 00:16:14.362195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:07.804 [2024-11-19 00:16:14.362217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:07.804 [2024-11-19 00:16:14.362237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:07.804 [2024-11-19 00:16:14.362256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:07.804 [2024-11-19 00:16:14.362278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:07.804 [2024-11-19 00:16:14.362297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.804 [2024-11-19 00:16:14.362311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:07.804 [2024-11-19 00:16:14.362317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:07.804 [2024-11-19 00:16:14.362323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.804 [2024-11-19 00:16:14.362329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:07.804 [2024-11-19 00:16:14.362336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:07.804 [2024-11-19 00:16:14.362344] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:07.804 [2024-11-19 00:16:14.362358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:07.804 [2024-11-19 00:16:14.362368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:07.804 [2024-11-19 00:16:14.362383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:07.804 [2024-11-19 00:16:14.362391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.804 [2024-11-19 00:16:14.362407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:07.804 [2024-11-19 00:16:14.362414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:07.804 [2024-11-19 00:16:14.362421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:07.804 [2024-11-19 00:16:14.362430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:07.804 [2024-11-19 00:16:14.362437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:07.804 [2024-11-19 00:16:14.362444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:07.804 [2024-11-19 00:16:14.362453] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:07.804 [2024-11-19 00:16:14.362465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:07.804 [2024-11-19 00:16:14.362480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:07.804 [2024-11-19 00:16:14.362487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:07.804 [2024-11-19 00:16:14.362494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:07.804 [2024-11-19 00:16:14.362502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:07.804 [2024-11-19 00:16:14.362510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:07.804 [2024-11-19 00:16:14.362517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:07.804 [2024-11-19 00:16:14.362524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:07.804 [2024-11-19 00:16:14.362531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:07.804 [2024-11-19 00:16:14.362538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:07.804 [2024-11-19 00:16:14.362575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:07.804 [2024-11-19 00:16:14.362584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:07.804 [2024-11-19 00:16:14.362599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:07.804 [2024-11-19 00:16:14.362607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:07.804 [2024-11-19 00:16:14.362615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:07.804 [2024-11-19 00:16:14.362623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.362631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:07.804 [2024-11-19 00:16:14.362639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:32:07.804 [2024-11-19 00:16:14.362647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.390924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.390975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:07.804 [2024-11-19 00:16:14.390987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.234 ms 00:32:07.804 [2024-11-19 00:16:14.390995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.391087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.391095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:07.804 [2024-11-19 00:16:14.391104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:07.804 [2024-11-19 00:16:14.391116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.440717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.440775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:07.804 [2024-11-19 00:16:14.440789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.527 ms 00:32:07.804 [2024-11-19 00:16:14.440798] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:07.804 [2024-11-19 00:16:14.440850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.804 [2024-11-19 00:16:14.440861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:07.804 [2024-11-19 00:16:14.440870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:07.805 [2024-11-19 00:16:14.440878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.440993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.441008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:07.805 [2024-11-19 00:16:14.441017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:07.805 [2024-11-19 00:16:14.441025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.441175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.441191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:07.805 [2024-11-19 00:16:14.441202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:32:07.805 [2024-11-19 00:16:14.441210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.457942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.458000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:07.805 [2024-11-19 00:16:14.458012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.710 ms 00:32:07.805 [2024-11-19 00:16:14.458021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.458212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:07.805 [2024-11-19 00:16:14.458228] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:07.805 [2024-11-19 00:16:14.458239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.458250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:07.805 [2024-11-19 00:16:14.458262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:07.805 [2024-11-19 00:16:14.458270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.470595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.470645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:07.805 [2024-11-19 00:16:14.470656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:32:07.805 [2024-11-19 00:16:14.470664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.470799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.470808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:07.805 [2024-11-19 00:16:14.470819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:32:07.805 [2024-11-19 00:16:14.470832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 
00:16:14.470886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.470897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:07.805 [2024-11-19 00:16:14.470906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:07.805 [2024-11-19 00:16:14.470914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.471536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.471571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:07.805 [2024-11-19 00:16:14.471582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:32:07.805 [2024-11-19 00:16:14.471590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.471609] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:07.805 [2024-11-19 00:16:14.471624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.471633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:07.805 [2024-11-19 00:16:14.471642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:07.805 [2024-11-19 00:16:14.471652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.484781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:07.805 [2024-11-19 00:16:14.484955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.484966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:07.805 [2024-11-19 00:16:14.484978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.282 ms 00:32:07.805 [2024-11-19 00:16:14.484987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.487240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.487283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:07.805 [2024-11-19 00:16:14.487293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:32:07.805 [2024-11-19 00:16:14.487302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.487384] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:07.805 [2024-11-19 00:16:14.487839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.487862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:07.805 [2024-11-19 00:16:14.487872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:32:07.805 [2024-11-19 00:16:14.487881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.487910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.487926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:07.805 [2024-11-19 00:16:14.487936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:07.805 [2024-11-19 00:16:14.487944] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:07.805 [2024-11-19 00:16:14.487977] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:07.805 [2024-11-19 00:16:14.487987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.805 [2024-11-19 00:16:14.487996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:07.805 [2024-11-19 00:16:14.488007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:07.805 [2024-11-19 00:16:14.488016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.067 [2024-11-19 00:16:14.515854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.067 [2024-11-19 00:16:14.515914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:08.067 [2024-11-19 00:16:14.515928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.818 ms 00:32:08.067 [2024-11-19 00:16:14.515937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.067 [2024-11-19 00:16:14.516032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.067 [2024-11-19 00:16:14.516043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:08.067 [2024-11-19 00:16:14.516055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:08.067 [2024-11-19 00:16:14.516063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.067 [2024-11-19 00:16:14.517613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.203 ms, result 0 00:32:09.457  [2024-11-19T00:16:16.723Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-19T00:16:18.114Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-19T00:16:19.062Z] Copying: 35/1024 [MB] (12 MBps) [2024-11-19T00:16:20.004Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-19T00:16:20.950Z] Copying: 57/1024 [MB] (10 MBps) [2024-11-19T00:16:21.949Z] Copying: 77/1024 [MB] (20 MBps) [2024-11-19T00:16:22.893Z] Copying: 90/1024 [MB] (12 MBps) [2024-11-19T00:16:23.838Z] Copying: 104/1024 [MB] (13 MBps) [2024-11-19T00:16:24.781Z] Copying: 120/1024 [MB] (16 MBps) [2024-11-19T00:16:25.725Z] Copying: 138/1024 [MB] (18 MBps) [2024-11-19T00:16:27.112Z] Copying: 163/1024 [MB] (24 MBps) [2024-11-19T00:16:28.057Z] Copying: 179/1024 [MB] (15 MBps) [2024-11-19T00:16:29.001Z] Copying: 197/1024 [MB] (17 MBps) [2024-11-19T00:16:29.945Z] Copying: 211/1024 [MB] (14 MBps) [2024-11-19T00:16:30.888Z] Copying: 227/1024 [MB] (15 MBps) [2024-11-19T00:16:31.833Z] Copying: 243/1024 [MB] (15 MBps) [2024-11-19T00:16:32.778Z] Copying: 266/1024 [MB] (22 MBps) [2024-11-19T00:16:33.722Z] Copying: 278/1024 [MB] (12 MBps) [2024-11-19T00:16:35.111Z] Copying: 291/1024 [MB] (12 MBps) [2024-11-19T00:16:36.055Z] Copying: 306/1024 [MB] (15 MBps) [2024-11-19T00:16:37.000Z] Copying: 320/1024 [MB] (13 MBps) [2024-11-19T00:16:37.946Z] Copying: 339/1024 [MB] (19 MBps) [2024-11-19T00:16:38.890Z] Copying: 357/1024 [MB] (17 MBps) [2024-11-19T00:16:39.833Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-19T00:16:40.779Z] Copying: 388/1024 [MB] (19 MBps) [2024-11-19T00:16:41.722Z] Copying: 399/1024 [MB] (10 MBps) [2024-11-19T00:16:43.110Z] Copying: 416/1024 [MB] (17 MBps) [2024-11-19T00:16:44.053Z] Copying: 433/1024 [MB] (16 MBps) [2024-11-19T00:16:44.996Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-19T00:16:45.941Z] Copying: 471/1024 [MB] (21 MBps) [2024-11-19T00:16:46.886Z] Copying: 481/1024 
[MB] (10 MBps) [2024-11-19T00:16:47.829Z] Copying: 503/1024 [MB] (21 MBps) [2024-11-19T00:16:48.775Z] Copying: 517/1024 [MB] (14 MBps) [2024-11-19T00:16:49.740Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-19T00:16:51.124Z] Copying: 547/1024 [MB] (19 MBps) [2024-11-19T00:16:52.068Z] Copying: 558/1024 [MB] (10 MBps) [2024-11-19T00:16:53.012Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-19T00:16:53.957Z] Copying: 586/1024 [MB] (17 MBps) [2024-11-19T00:16:54.898Z] Copying: 605/1024 [MB] (18 MBps) [2024-11-19T00:16:55.844Z] Copying: 618/1024 [MB] (13 MBps) [2024-11-19T00:16:56.789Z] Copying: 633/1024 [MB] (15 MBps) [2024-11-19T00:16:57.735Z] Copying: 645/1024 [MB] (12 MBps) [2024-11-19T00:16:59.123Z] Copying: 667/1024 [MB] (21 MBps) [2024-11-19T00:17:00.073Z] Copying: 687/1024 [MB] (20 MBps) [2024-11-19T00:17:01.016Z] Copying: 701/1024 [MB] (13 MBps) [2024-11-19T00:17:01.962Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-19T00:17:02.906Z] Copying: 728/1024 [MB] (15 MBps) [2024-11-19T00:17:03.850Z] Copying: 744/1024 [MB] (15 MBps) [2024-11-19T00:17:04.795Z] Copying: 755/1024 [MB] (11 MBps) [2024-11-19T00:17:05.739Z] Copying: 767/1024 [MB] (11 MBps) [2024-11-19T00:17:07.128Z] Copying: 791/1024 [MB] (24 MBps) [2024-11-19T00:17:08.071Z] Copying: 808/1024 [MB] (16 MBps) [2024-11-19T00:17:09.018Z] Copying: 823/1024 [MB] (15 MBps) [2024-11-19T00:17:09.962Z] Copying: 847/1024 [MB] (23 MBps) [2024-11-19T00:17:10.906Z] Copying: 872/1024 [MB] (24 MBps) [2024-11-19T00:17:11.850Z] Copying: 884/1024 [MB] (12 MBps) [2024-11-19T00:17:12.795Z] Copying: 901/1024 [MB] (17 MBps) [2024-11-19T00:17:13.740Z] Copying: 912/1024 [MB] (11 MBps) [2024-11-19T00:17:15.128Z] Copying: 928/1024 [MB] (15 MBps) [2024-11-19T00:17:16.074Z] Copying: 941/1024 [MB] (13 MBps) [2024-11-19T00:17:17.019Z] Copying: 959/1024 [MB] (17 MBps) [2024-11-19T00:17:17.963Z] Copying: 974/1024 [MB] (15 MBps) [2024-11-19T00:17:18.944Z] Copying: 1001/1024 [MB] (26 MBps) [2024-11-19T00:17:19.205Z] Copying: 1015/1024 [MB] (13 MBps) [2024-11-19T00:17:19.777Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 00:17:19.468654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.085 [2024-11-19 00:17:19.468715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:13.085 [2024-11-19 00:17:19.468726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:13.085 [2024-11-19 00:17:19.468733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.085 [2024-11-19 00:17:19.468751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:13.085 [2024-11-19 00:17:19.471368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.085 [2024-11-19 00:17:19.471390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:13.085 [2024-11-19 00:17:19.471399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:33:13.085 [2024-11-19 00:17:19.471406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.085 [2024-11-19 00:17:19.471738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.085 [2024-11-19 00:17:19.471748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:13.085 [2024-11-19 00:17:19.471755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:33:13.085 [2024-11-19 00:17:19.471761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
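
The trace_step records above and below follow a fixed four-record pattern per management step (Action, name, duration, status), and each management process closes with a finish_msg summary line. A minimal shell sketch for pulling those timings out of a saved copy of this console output; the build.log filename is an assumption, not something the harness writes:

    # Summary line per management process ('FTL startup', 'FTL fast shutdown', ...).
    grep -oE "Management process finished, name '[^']+', duration = [0-9.]+ ms, result [0-9]+" build.log

    # Ten slowest individual steps reported by trace_step.
    grep -oE 'duration: [0-9.]+ ms' build.log | sort -k2 -rn | head
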
00:33:13.085 [2024-11-19 00:17:19.471783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.085 [2024-11-19 00:17:19.471790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:13.085 [2024-11-19 00:17:19.471796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:13.085 [2024-11-19 00:17:19.471802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.085 [2024-11-19 00:17:19.471842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.086 [2024-11-19 00:17:19.471850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:13.086 [2024-11-19 00:17:19.471858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:13.086 [2024-11-19 00:17:19.471864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.086 [2024-11-19 00:17:19.471875] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:13.086 [2024-11-19 00:17:19.471884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:13.086 [2024-11-19 00:17:19.471891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.471998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472146] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 
00:17:19.472299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:13.086 [2024-11-19 00:17:19.472381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:33:13.087 [2024-11-19 00:17:19.472449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:13.087 [2024-11-19 00:17:19.472500] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:13.087 [2024-11-19 00:17:19.472506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f4f8e29-ab62-4bf4-a4e2-751020ad819a 00:33:13.087 [2024-11-19 00:17:19.472511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:13.087 [2024-11-19 00:17:19.472517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:33:13.087 [2024-11-19 00:17:19.472522] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:33:13.087 [2024-11-19 00:17:19.472529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:33:13.087 [2024-11-19 00:17:19.472560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:13.087 [2024-11-19 00:17:19.472570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:13.087 [2024-11-19 00:17:19.472576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:13.087 [2024-11-19 00:17:19.472581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:13.087 [2024-11-19 00:17:19.472585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:13.087 [2024-11-19 00:17:19.472591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.087 [2024-11-19 00:17:19.472599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:13.087 [2024-11-19 00:17:19.472605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:33:13.087 [2024-11-19 00:17:19.472610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.483498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.087 [2024-11-19 00:17:19.483521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:13.087 [2024-11-19 00:17:19.483530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.876 ms 00:33:13.087 [2024-11-19 00:17:19.483539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.483828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.087 [2024-11-19 00:17:19.483836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 
00:33:13.087 [2024-11-19 00:17:19.483843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:33:13.087 [2024-11-19 00:17:19.483849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.509949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.509976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:13.087 [2024-11-19 00:17:19.509983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.509989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.510034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.510041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:13.087 [2024-11-19 00:17:19.510047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.510053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.510095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.510102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:13.087 [2024-11-19 00:17:19.510111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.510117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.510139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.510145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:13.087 [2024-11-19 00:17:19.510152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.510158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.569255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.569288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:13.087 [2024-11-19 00:17:19.569297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.569303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.617763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.617794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:13.087 [2024-11-19 00:17:19.617802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.617808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.617859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.617866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:13.087 [2024-11-19 00:17:19.617873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.617884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.617910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.617916] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:13.087 [2024-11-19 00:17:19.617923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.617929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.617982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.617990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:13.087 [2024-11-19 00:17:19.617996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.618002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.618023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.618030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:13.087 [2024-11-19 00:17:19.618035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.618041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.618068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.618074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:13.087 [2024-11-19 00:17:19.618080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.618085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.618117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.087 [2024-11-19 00:17:19.618135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:13.087 [2024-11-19 00:17:19.618142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.087 [2024-11-19 00:17:19.618147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.087 [2024-11-19 00:17:19.618234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.560 ms, result 0 00:33:13.658 00:33:13.658 00:33:13.658 00:17:20 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:16.207 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81236 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81236 ']' 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81236 00:33:16.207 Process with pid 81236 is not found 00:33:16.207 Remove shared memory files 00:33:16.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81236) - No such process 
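
The pass/fail criterion of the restore test is the md5sum -c check above: a checksum of the test file is recorded before the simulated dirty shutdown, and the data read back through the fast-restored FTL device must match it byte for byte. A condensed sketch of that verify-then-clean-up pattern, assuming $pid holds the test app's PID (which, as here, may already have exited by cleanup time):

    md5sum testfile > testfile.md5              # recorded before the crash/restore cycle
    # ... dirty shutdown and FTL fast restore happen in between ...
    md5sum -c testfile.md5                      # prints 'testfile: OK' only on an exact match
    rm -f testfile testfile.md5 ftl.json        # scrub the test artifacts
    kill -0 "$pid" 2>/dev/null && kill "$pid"   # signal only if the process still exists
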
00:33:16.207 00:17:22 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81236 is not found' 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_band_md /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_l2p_l1 /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_l2p_l2 /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_l2p_l2_ctx /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_nvc_md /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_p2l_pool /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_sb /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_sb_shm /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_trim_bitmap /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_trim_log /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_trim_md /dev/hugepages/ftl_5f4f8e29-ab62-4bf4-a4e2-751020ad819a_vmap 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:16.207 00:33:16.207 real 4m53.096s 00:33:16.207 user 4m39.764s 00:33:16.207 sys 0m12.651s 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.207 00:17:22 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:16.207 ************************************ 00:33:16.207 END TEST ftl_restore_fast 00:33:16.207 ************************************ 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@14 -- # killprocess 72226 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 72226 ']' 00:33:16.207 Process with pid 72226 is not found 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@958 -- # kill -0 72226 00:33:16.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72226) - No such process 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72226 is not found' 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84223 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84223 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@835 -- # '[' -z 84223 ']' 00:33:16.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.207 00:17:22 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
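
waitforlisten above blocks until the freshly started spdk_tgt answers on its RPC socket before any bdev RPCs are issued; the records that follow then attach the NVMe controller and sweep away leftover logical-volume stores. A rough, self-contained sketch of that flow using the paths from this log; the polling loop is a crude stand-in for waitforlisten, with rpc_get_methods used purely as a liveness probe:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Attach the PCIe NVMe controller, then delete any lvstores left by earlier tests.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    for uuid in $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u "$uuid"
    done
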
00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.207 00:17:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:16.207 [2024-11-19 00:17:22.687411] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:16.207 [2024-11-19 00:17:22.687500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84223 ] 00:33:16.207 [2024-11-19 00:17:22.836726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.468 [2024-11-19 00:17:22.911601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.039 00:17:23 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.039 00:17:23 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:17.039 00:17:23 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:17.300 nvme0n1 00:33:17.300 00:17:23 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:17.300 00:17:23 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:17.300 00:17:23 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:17.561 00:17:24 ftl -- ftl/common.sh@28 -- # stores=d723bf6c-6a2e-46e7-82f0-82d268e1ba1e 00:33:17.561 00:17:24 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:17.561 00:17:24 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d723bf6c-6a2e-46e7-82f0-82d268e1ba1e 00:33:17.561 00:17:24 ftl -- ftl/ftl.sh@23 -- # killprocess 84223 00:33:17.561 00:17:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 84223 ']' 00:33:17.561 00:17:24 ftl -- common/autotest_common.sh@958 -- # kill -0 84223 00:33:17.561 00:17:24 ftl -- common/autotest_common.sh@959 -- # uname 00:33:17.561 00:17:24 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.561 00:17:24 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84223 00:33:17.820 00:17:24 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.821 00:17:24 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.821 killing process with pid 84223 00:33:17.821 00:17:24 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84223' 00:33:17.821 00:17:24 ftl -- common/autotest_common.sh@973 -- # kill 84223 00:33:17.821 00:17:24 ftl -- common/autotest_common.sh@978 -- # wait 84223 00:33:18.763 00:17:25 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:19.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:19.024 Waiting for block devices as requested 00:33:19.024 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.286 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.286 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.286 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:24.576 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:24.576 00:17:30 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:24.576 Remove shared memory files 00:33:24.576 00:17:30 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:24.576 00:17:30 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:24.576 00:17:30 ftl -- 
ftl/common.sh@206 -- # rm -f rm -f 00:33:24.576 00:17:30 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:24.576 00:17:30 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:24.576 00:17:30 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:24.576 00:33:24.576 real 18m10.647s 00:33:24.576 user 20m20.997s 00:33:24.576 sys 1m32.857s 00:33:24.576 00:17:30 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.576 00:17:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:24.576 ************************************ 00:33:24.576 END TEST ftl 00:33:24.576 ************************************ 00:33:24.576 00:17:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:24.576 00:17:31 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:24.576 00:17:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:24.576 00:17:31 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:24.576 00:17:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:24.576 00:17:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:24.576 00:17:31 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:24.576 00:17:31 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:24.576 00:17:31 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:24.576 00:17:31 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:24.576 00:17:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.576 00:17:31 -- common/autotest_common.sh@10 -- # set +x 00:33:24.576 00:17:31 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:24.576 00:17:31 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:24.576 00:17:31 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:24.576 00:17:31 -- common/autotest_common.sh@10 -- # set +x 00:33:25.962 INFO: APP EXITING 00:33:25.962 INFO: killing all VMs 00:33:25.962 INFO: killing vhost app 00:33:25.962 INFO: EXIT DONE 00:33:26.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:26.486 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:26.486 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:26.486 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:26.486 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:27.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.320 Cleaning 00:33:27.320 Removing: /var/run/dpdk/spdk0/config 00:33:27.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:27.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:27.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:27.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:27.320 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:27.320 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:27.320 Removing: /var/run/dpdk/spdk0 00:33:27.320 Removing: /var/run/dpdk/spdk_pid56971 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57173 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57380 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57473 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57518 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57635 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57653 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57841 00:33:27.320 Removing: /var/run/dpdk/spdk_pid57928 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58018 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58124 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58215 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58255 
00:33:27.320 Removing: /var/run/dpdk/spdk_pid58291 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58362 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58446 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58871 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58935 00:33:27.320 Removing: /var/run/dpdk/spdk_pid58987 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59003 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59094 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59110 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59201 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59219 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59272 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59290 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59338 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59356 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59510 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59552 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59636 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59802 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59886 00:33:27.320 Removing: /var/run/dpdk/spdk_pid59923 00:33:27.320 Removing: /var/run/dpdk/spdk_pid60344 00:33:27.320 Removing: /var/run/dpdk/spdk_pid60442 00:33:27.321 Removing: /var/run/dpdk/spdk_pid60558 00:33:27.321 Removing: /var/run/dpdk/spdk_pid60611 00:33:27.321 Removing: /var/run/dpdk/spdk_pid60642 00:33:27.321 Removing: /var/run/dpdk/spdk_pid60720 00:33:27.321 Removing: /var/run/dpdk/spdk_pid61342 00:33:27.321 Removing: /var/run/dpdk/spdk_pid61379 00:33:27.321 Removing: /var/run/dpdk/spdk_pid61844 00:33:27.321 Removing: /var/run/dpdk/spdk_pid61942 00:33:27.321 Removing: /var/run/dpdk/spdk_pid62062 00:33:27.321 Removing: /var/run/dpdk/spdk_pid62115 00:33:27.321 Removing: /var/run/dpdk/spdk_pid62141 00:33:27.321 Removing: /var/run/dpdk/spdk_pid62166 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64004 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64141 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64145 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64157 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64204 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64208 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64220 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64271 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64275 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64287 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64332 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64336 00:33:27.321 Removing: /var/run/dpdk/spdk_pid64348 00:33:27.321 Removing: /var/run/dpdk/spdk_pid65712 00:33:27.321 Removing: /var/run/dpdk/spdk_pid65809 00:33:27.321 Removing: /var/run/dpdk/spdk_pid67212 00:33:27.321 Removing: /var/run/dpdk/spdk_pid68595 00:33:27.582 Removing: /var/run/dpdk/spdk_pid68673 00:33:27.582 Removing: /var/run/dpdk/spdk_pid68749 00:33:27.582 Removing: /var/run/dpdk/spdk_pid68829 00:33:27.582 Removing: /var/run/dpdk/spdk_pid68932 00:33:27.582 Removing: /var/run/dpdk/spdk_pid69002 00:33:27.582 Removing: /var/run/dpdk/spdk_pid69144 00:33:27.582 Removing: /var/run/dpdk/spdk_pid69492 00:33:27.582 Removing: /var/run/dpdk/spdk_pid69530 00:33:27.582 Removing: /var/run/dpdk/spdk_pid69977 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70165 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70269 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70380 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70426 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70453 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70756 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70815 00:33:27.582 Removing: /var/run/dpdk/spdk_pid70883 00:33:27.582 Removing: 
/var/run/dpdk/spdk_pid71268 00:33:27.582 Removing: /var/run/dpdk/spdk_pid71414 00:33:27.582 Removing: /var/run/dpdk/spdk_pid72226 00:33:27.582 Removing: /var/run/dpdk/spdk_pid72353 00:33:27.582 Removing: /var/run/dpdk/spdk_pid72517 00:33:27.582 Removing: /var/run/dpdk/spdk_pid72618 00:33:27.582 Removing: /var/run/dpdk/spdk_pid72937 00:33:27.582 Removing: /var/run/dpdk/spdk_pid73225 00:33:27.582 Removing: /var/run/dpdk/spdk_pid73624 00:33:27.582 Removing: /var/run/dpdk/spdk_pid73812 00:33:27.582 Removing: /var/run/dpdk/spdk_pid73996 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74044 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74219 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74244 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74302 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74535 00:33:27.582 Removing: /var/run/dpdk/spdk_pid74779 00:33:27.582 Removing: /var/run/dpdk/spdk_pid75330 00:33:27.582 Removing: /var/run/dpdk/spdk_pid76119 00:33:27.582 Removing: /var/run/dpdk/spdk_pid76558 00:33:27.582 Removing: /var/run/dpdk/spdk_pid77382 00:33:27.582 Removing: /var/run/dpdk/spdk_pid77524 00:33:27.582 Removing: /var/run/dpdk/spdk_pid77618 00:33:27.582 Removing: /var/run/dpdk/spdk_pid78163 00:33:27.582 Removing: /var/run/dpdk/spdk_pid78221 00:33:27.582 Removing: /var/run/dpdk/spdk_pid78928 00:33:27.582 Removing: /var/run/dpdk/spdk_pid79387 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80187 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80305 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80353 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80410 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80467 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80526 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80725 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80818 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80886 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80937 00:33:27.582 Removing: /var/run/dpdk/spdk_pid80973 00:33:27.582 Removing: /var/run/dpdk/spdk_pid81071 00:33:27.582 Removing: /var/run/dpdk/spdk_pid81236 00:33:27.582 Removing: /var/run/dpdk/spdk_pid81469 00:33:27.582 Removing: /var/run/dpdk/spdk_pid82176 00:33:27.582 Removing: /var/run/dpdk/spdk_pid82851 00:33:27.582 Removing: /var/run/dpdk/spdk_pid83518 00:33:27.582 Removing: /var/run/dpdk/spdk_pid84223 00:33:27.582 Clean 00:33:27.582 00:17:34 -- common/autotest_common.sh@1453 -- # return 0 00:33:27.582 00:17:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:27.582 00:17:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.582 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 00:17:34 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:27.583 00:17:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.583 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:27.844 00:17:34 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:27.844 00:17:34 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:27.844 00:17:34 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:27.844 00:17:34 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:27.844 00:17:34 -- spdk/autotest.sh@398 -- # hostname 00:33:27.844 00:17:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t 
00:33:54.430 00:17:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:56.981 00:18:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:59.529 00:18:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:01.446 00:18:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:04.752 00:18:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:06.728 00:18:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:08.646 00:18:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:08.646 00:18:14 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:08.646 00:18:14 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:08.646 00:18:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:08.646 00:18:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:08.646 00:18:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5035 ]]
+ sudo kill 5035
00:34:08.658 [Pipeline] }
00:34:08.674 [Pipeline] // timeout
00:34:08.679 [Pipeline] }
00:34:08.694 [Pipeline] // stage
00:34:08.699 [Pipeline] }
00:34:08.713 [Pipeline] // catchError
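The autotest.sh@400-407 calls above prune the merged tracefile with lcov's -r (remove-by-pattern) mode, and timing_finish renders the accumulated timing.txt as a flame graph. A condensed sketch of both steps; the file names are shortened, timing.svg is an illustrative output name, and flamegraph.pl is the stock FlameGraph script, which writes SVG to stdout:

  # Strip third-party and helper sources from the merged coverage data;
  # each pass removes records whose source path matches the glob.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
             '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r cov_total.info "$pat" -o cov_total.info
  done

  # Render the per-step timing log as an SVG flame graph; --nametype and
  # --countname only change the hover labels, as in the call logged above.
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
      --nametype Step: --countname seconds timing.txt > timing.svg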
00:34:08.722 [Pipeline] stage
00:34:08.725 [Pipeline] { (Stop VM)
00:34:08.737 [Pipeline] sh
00:34:09.023 + vagrant halt
00:34:12.328 ==> default: Halting domain...
00:34:17.632 [Pipeline] sh
00:34:17.913 + vagrant destroy -f
00:34:20.458 ==> default: Removing domain...
00:34:21.046 [Pipeline] sh
00:34:21.334 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:21.345 [Pipeline] }
00:34:21.360 [Pipeline] // stage
00:34:21.366 [Pipeline] }
00:34:21.381 [Pipeline] // dir
00:34:21.388 [Pipeline] }
00:34:21.402 [Pipeline] // wrap
00:34:21.408 [Pipeline] }
00:34:21.421 [Pipeline] // catchError
00:34:21.432 [Pipeline] stage
00:34:21.435 [Pipeline] { (Epilogue)
00:34:21.448 [Pipeline] sh
00:34:21.734 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:27.025 [Pipeline] catchError
00:34:27.027 [Pipeline] {
00:34:27.040 [Pipeline] sh
00:34:27.326 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:27.327 Artifacts sizes are good
00:34:27.338 [Pipeline] }
00:34:27.352 [Pipeline] // catchError
00:34:27.362 [Pipeline] archiveArtifacts
00:34:27.370 Archiving artifacts
00:34:27.535 [Pipeline] cleanWs
00:34:27.556 [WS-CLEANUP] Deleting project workspace...
00:34:27.556 [WS-CLEANUP] Deferred wipeout is used...
00:34:27.575 [WS-CLEANUP] done
00:34:27.577 [Pipeline] }
00:34:27.593 [Pipeline] // stage
00:34:27.598 [Pipeline] }
00:34:27.646 [Pipeline] // node
00:34:27.651 [Pipeline] End of Pipeline
00:34:27.686 Finished: SUCCESS
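For reference, the Stop VM and Epilogue stages above reduce to a short teardown sequence. A sketch of the equivalent manual commands, run from the Vagrantfile directory; $WORKSPACE stands in for the Jenkins workspace path shown in the log:

  # Gracefully stop the libvirt guest, then delete the domain without
  # prompting (the two vagrant calls logged in the Stop VM stage).
  vagrant halt
  vagrant destroy -f

  # Preserve the results directory before cleanWs wipes the workspace,
  # mirroring the mv into the nvme-vg-autotest workspace above.
  mv output "$WORKSPACE/output"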